activemq-users mailing list archives

From Gary Tully <>
Subject Re: Backlog data causes producers to slow down.
Date Mon, 12 Sep 2011 16:00:33 GMT
For the queue case with backlogs (when the consumers don't keep up),
you may want to experiment with <kahaDB
concurrentStoreAndDispatchQueues="false" />
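
In case it helps, a minimal sketch of where that attribute could sit in
activemq.xml, combined with the KahaDB parameters you already listed
(the broker name and data directory below are just placeholders):

    <broker xmlns="http://activemq.apache.org/schema/core"
            brokerName="localhost"
            dataDirectory="${activemq.data}">
      <persistenceAdapter>
        <!-- concurrentStoreAndDispatchQueues="false" forces queue messages
             to be written to the store before dispatch, which can reduce
             contention between producers and async message removal when a
             backlog exists -->
        <kahaDB directory="${activemq.data}/kahadb"
                concurrentStoreAndDispatchQueues="false"
                enableJournalDiskSyncs="false"
                indexWriteBatchSize="1000"
                enableIndexWriteAsync="true"/>
      </persistenceAdapter>
    </broker>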

On 12 September 2011 01:08, bbansal <> wrote:
> Hello folks,
> I am evaluating ActiveMQ for some simple scenarios. The web server will push
> notifications to a queue/topic to be consumed by one or many consumers.
> The one requirement is that the web server should not be impacted and should
> be able to write at its own speed even if the consumers go down.
> ActiveMQ is performing very well at about 1500 QPS (8 producer threads,
> persistence, KahaDB). The KahaDB parameters being used are
> enableJournalDiskSyncs="false", indexWriteBatchSize="1000", and
> enableIndexWriteAsync="true".
> The system works great if the consumers are all caught up. The issue is when I
> test scenarios with backlogged data (running the producers for
> 30 minutes or so) and then start the consumers. The consumers show a good
> consumption rate, but the producers (the same 8 threads as before) cannot do
> more than 120 QPS, a degradation of more than 90%.
> I ran a profiler (JProfiler) on the code, and it looks like the writers are
> getting stuck waiting for write locks while competing with removeAsyncMessages()
> or the call that clears messages that have been acknowledged by clients.
> I have seen similar complaints from other folks. Are there settings we can
> use to fix the problem? I don't want to degrade any guarantee level (e.g.
> disable acks).
> I would be more than happy to run experiments with different settings if folks
> have suggestions.
> --

