qpid-users mailing list archives

From Praveen M <lefthandma...@gmail.com>
Subject Re: DerbyDB vs BerkeleyDB using the Java Broker
Date Mon, 19 Dec 2011 18:57:54 GMT
Hi Robbie,

I tried grabbing the latest changes and re-running my tests, but I didn't
see the numbers that you mentioned in your mail. :( They remain at roughly
what I reported in my earlier email.

Can you please tell me which revision you ran against, so that I can try
again?

I'm running with 4GB of memory allocated to the Broker and don't see any
resource constraints in terms of memory or CPU.
My test is on a box with 12GB RAM and 12 CPU cores.

I think I might be missing something. Did you make any specific changes to
your broker config, and were the results that you posted from running the
tests that I emailed?

Thanks,
Praveen

On Mon, Dec 19, 2011 at 10:45 AM, Praveen M <lefthandmagic@gmail.com> wrote:

> Hi Robbie,
>
> Thank you for the mail. I will grab the latest changes to pick up the
> recent performance tweaks and run my tests again.
>
> Yep, I made the test enqueue and dequeue at the same time, as I was trying
> to simulate something close to how it'd work in production. I do know that
> the dequeue throughput rate is not a very accurate one. :) But yeah, like
> you said, all I was trying to check is which one performs better,
> Berkeley or Derby.
>
> Given that Derby outperforms Berkeley for some use cases, what would be
> your recommendation to use as a persistent store? I understand that
> Berkeley is used in production by various users of Qpid more widely than
> Derby. Would that mean Berkeley can be expected to be a more robust
> product, as it might have been tested more thoroughly?
>
> Would you have a recommendation for picking one over the other as the
> MessageStore?
>
> Thanks to you and the rest of the team for the work you are putting into
> performance tuning the product.
> -
> Praveen
>
>
> On Sun, Dec 18, 2011 at 6:31 PM, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
>
>> Hi Praveen,
>>
>> I notice both your tests actually seem to enqueue and dequeue messages
>> at the same time: since you commit per publish, and the message
>> listener will already be receiving a message which then gets committed
>> by the next publish due to the single session in use, leaving a
>> message on the queue at the end. So you might not be getting the
>> precise number you are looking for in the first test, but that doesn't
>> really change the relative results it gives.
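>>
>> Roughly, the sharing looks like this (a minimal sketch with
>> hypothetical names, not your actual test code):
>>
>>   import javax.jms.*;
>>
>>   class SharedSessionSketch {
>>       static void run(Connection connection, Queue queue) throws JMSException {
>>           // One transacted session shared by the producer and the consumer.
>>           Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
>>           MessageProducer producer = session.createProducer(queue);
>>           MessageConsumer consumer = session.createConsumer(queue);
>>           consumer.setMessageListener(new MessageListener() {
>>               public void onMessage(Message m) {
>>                   // The receive is part of the session's transaction, so it
>>                   // is only committed when the publish loop next commits.
>>               }
>>           });
>>           connection.start(); // begin delivery to the listener
>>
>>           for (int i = 0; i < 1000; i++) {
>>               producer.send(session.createTextMessage("msg-" + i));
>>               session.commit(); // commits this publish AND the prior receive
>>           }
>>           // The last received message is never committed, so one message
>>           // remains on the queue at the end.
>>       }
>>   }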
>>
>> I didn't see quite the same disparity when I ran the tests on my box,
>> but the Derby store did still win significantly (giving ~2.3 vs 4.4 ms
>> and 350 vs 600 msg/s best cases). There have also been some changes
>> made on trunk since your runs to massively improve the Java broker's
>> transient messaging performance, which may have influenced things here
>> a little. Either way, although it makes the test suite runs take
>> significantly longer, it would seem that in actual use the Derby store
>> is currently noticeably faster in at least some use cases. As I have
>> said previously, our attention to the Java broker's performance has
>> been lacking for a while, but we are going to spend some quality time
>> on performance testing very soon now, and given the recent transient
>> improvements we will undoubtedly be looking at persistent performance
>> going forward as well.
>>
>> Robbie
>>
>> On 3 December 2011 00:45, Praveen M <lefthandmagic@gmail.com> wrote:
>> > Hi,
>> >
>> > I've been trying to benchmark BerkeleyDB against DerbyDB with the Java
>> > broker, to find which DB is more performant.
>> >
>> > I have heard from earlier discussions that BerkeleyDB runs faster in
>> > Qpid's scalability tests. However, some of my tests showed the contrary.
>> >
>> > I had set up BDB using "ant build release-bin -Dmodules.opt=bdbstore
>> > -Ddownload-bdb=true", as directed in Robbie's earlier email in a similar
>> > topic thread.
>> >
>> > I tried running two tests in particular which are of interest to me:
>> >
>> > Test 1)
>> > Produce 1000 messages to the broker in transacted mode, such that after
>> > every enqueue you commit the transaction.
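>> >
>> > The core of the test is roughly this (a simplified sketch; the
>> > attached test has the full detail, and the Connection/Queue setup is
>> > omitted):
>> >
>> >   import javax.jms.*;
>> >
>> >   class TransactedEnqueueSketch {
>> >       static void run(Connection connection, Queue queue) throws JMSException {
>> >           Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
>> >           MessageProducer producer = session.createProducer(queue);
>> >           long totalMs = 0;
>> >           for (int i = 0; i < 1000; i++) {
>> >               long start = System.currentTimeMillis();
>> >               producer.send(session.createTextMessage("payload-" + i));
>> >               session.commit(); // commit after every enqueue
>> >               totalMs += System.currentTimeMillis() - start;
>> >           }
>> >           System.out.println("avg ms per transacted enqueue: " + totalMs / 1000.0);
>> >       }
>> >   }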
>> >
>> > The time taken to enqueue a message in transacted mode in the above test
>> > is approx 5-8 ms for DerbyDB and about 18-25 ms in the case of BerkeleyDB.
>> >
>> >
>> > Test 2)
>> > Produce 1000 messages in auto-ack mode, with a consumer already set up
>> > for the queue.
>> > When the 1000th message is processed, calculate its latency by doing
>> > latency = (System.currentTimeMillis() - message.getJMSTimestamp()).
>> >
>> > Try to compute an *approximate* dequeue rate by doing
>> > numberOfMessagesProcessed / latency, as in the sketch below.
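>> >
>> > A simplified sketch of that consumer (again with the Connection/Queue
>> > setup omitted; the attached test has the full detail):
>> >
>> >   import javax.jms.*;
>> >
>> >   class DequeueRateSketch {
>> >       static void run(Connection connection, Queue queue) throws JMSException {
>> >           Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>> >           MessageConsumer consumer = session.createConsumer(queue);
>> >           consumer.setMessageListener(new MessageListener() {
>> >               int processed = 0;
>> >               public void onMessage(Message message) {
>> >                   try {
>> >                       if (++processed == 1000) {
>> >                           // Elapsed time since the 1000th message was sent,
>> >                           // per its producer-assigned JMS timestamp.
>> >                           long latencyMs = System.currentTimeMillis()
>> >                                   - message.getJMSTimestamp();
>> >                           System.out.println("approx dequeue rate: "
>> >                                   + (1000 * 1000.0 / latencyMs) + " msg/s");
>> >                       }
>> >                   } catch (JMSException e) {
>> >                       e.printStackTrace();
>> >                   }
>> >               }
>> >           });
>> >           connection.start(); // begin delivery
>> >       }
>> >   }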
>> >
>> > In the above test, the results I got were:
>> >
>> > DerbyDB - 300-350 messages/second
>> > BDB - 40-50 messages/second
>> >
>> >
>> > I ran the tests against trunk (12/1).
>> >
>> > My Connection to Qpid has a max prefetch of 1 (as my use case requires
>> > this) and has tcp_nodelay set to true.
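>> >
>> > For reference, I'm setting both options on the connection URL, roughly
>> > like this (a sketch; credentials, host, and port are placeholders for
>> > my actual setup, and the exact option quoting may need checking):
>> >
>> >   import org.apache.qpid.client.AMQConnectionFactory;
>> >
>> >   // URLSyntaxException is thrown if the URL is malformed.
>> >   AMQConnectionFactory factory = new AMQConnectionFactory(
>> >       "amqp://guest:guest@clientid/test"
>> >       + "?brokerlist='tcp://localhost:5672?tcp_nodelay=''true'''"
>> >       + "&maxprefetch='1'");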
>> >
>> > I have attached the tests that I used for reference.
>> >
>> > Can someone please tell me if I'm doing something wrong in the above
>> > tests, or if there is an additional configuration that I'm missing?
>> >
>> > Or are these results valid? If valid, it would be great if the
>> > difference could be explained.
>> >
>> > Hoping to hear soon.
>> >
>> > Thank you,
>> > --
>> > -Praveen
>> >
>> >
>>
>>
>
>
> --
> -Praveen
>



-- 
-Praveen
