activemq-dev mailing list archives

From David Sitsky <s...@nuix.com>
Subject Re: Optimising PrefetchSubscription.dispatchPending() ideas
Date Tue, 12 Feb 2008 06:35:24 GMT
Hi Rob,

I was using a version that did have your most recent changes.

To give you a better idea of what I meant, I hacked up some changes, 
which you can see in the attached patch.

The idea is that instead of going through the pending list and performing 
the same computations over and over again on messages which have already 
been handled by other subscriptions, we move them to another list.
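
To make the shape of it concrete, here is a toy, self-contained sketch of 
what I mean (this is not the actual PrefetchSubscription code or the 
attached patch - the class and field names here are made up purely for 
illustration):

import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch only: instead of re-scanning nodes that another subscription
// has already claimed on every dispatch pass, park them on a side list and
// only re-examine them when something (e.g. a rollback) could have changed
// their state.
class PendingSketch {
    static class Node {
        final long id;
        boolean claimedByOther; // stands in for "acquired by another subscription"
        Node(long id) { this.id = id; }
    }

    private final Deque<Node> pending = new ArrayDeque<Node>();
    private final Deque<Node> parked = new ArrayDeque<Node>(); // the "other list"

    void add(Node n) { pending.addLast(n); }

    // One dispatch pass: only walks nodes we have not already ruled out.
    void dispatchPending(int room) {
        while (room > 0 && !pending.isEmpty()) {
            Node node = pending.pollFirst();
            if (node.claimedByOther) {
                parked.addLast(node); // don't look at it again on the next pass
                continue;
            }
            dispatch(node);
            room--;
        }
    }

    // Called when a rollback means previously claimed nodes may be free again.
    void onRollback() {
        while (!parked.isEmpty()) {
            pending.addFirst(parked.pollLast()); // put them back for re-examination
        }
    }

    private void dispatch(Node node) {
        System.out.println("dispatching " + node.id);
    }
}

The point is just that each pass only looks at nodes this subscription 
could plausibly dispatch, rather than re-walking everything.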

For a particular run, this reduced my application run-time from 47 
minutes to 38 minutes.

I'm sure there are better ways of implementing this - but do you see 
what I mean?

Cheers,
David

Rob Davies wrote:
> David,
> 
> which release are you working on? There was a change last night in 
> Queue that might affect the CPU usage.
> On Feb 8, 2008, at 5:11 PM, David Sitsky wrote:
> 
>> In my application, I have noticed with 20 consumers, the broker's CPU 
>> is going through the roof, with many threads in 
>> PrefetchSubscription.dispatchPending().  With my consumers, it might 
>> be 500-1000 messages dispatched before a commit() can be called.  With 
>> 20 consumers, this means there can be a build-up of 20,000 uncommitted 
>> messages lying around the system, let alone the new messages which are 
>> being pumped into the system at a furious rate.  Not nice, I know, but 
>> I don't have much choice about it at the moment, for 
>> application-specific reasons.
>>
>> As you can imagine, I can have some very big pending queues - 
>> sometimes 100,000 messages in size.
>>
>> I am experimenting with different prefetch sizes which may help, but I 
>> suspect every time a prefetch thread is trying to dispatch a message, 
>> it might have to iterate through very large numbers of deleted 
>> messages or messages which have been claimed by other subscribers 
>> before it finds a matching message.  Multiply this by 20, and there is 
>> a lot of CPU being consumed.  This worries me for scalability reasons 
>> - if I want to keep bumping up the number of consumers.
>>
>> I'm not sure what the best way of improving this is... is it possible 
>> when we call dispatchPending() to not call 
>> pendingMessageCursor.reset() perhaps?
> reset() is a nop for the QueueStoreCursor :(
>>
>>
>> I'm trying to understand why we need to reset the cursor, when 
>> presumably all of the messages we went over in a previous 
>> dispatchPending() call are either deleted, dispatched or locked by 
>> another node, and therefore don't need to be checked again (or we 
>> could just check whether we have reached the end of the cursor list)?
>>
>>
>> I realise that if a transaction is rolled back, a message that was 
>> previously locked by another consumer may be freed.  There are 
>> probably message ordering issues too.
>>
>> Is it possible, when we are iterating through the cursor and find a 
>> node locked by another consumer, to move it to the end of the cursor 
>> (or to another list) and check it again only if we find no matches?
>>
>> I'm sure there are a lot of complexities here I am not aware of - but 
>> I am curious what others think.
>>
>> Making this sort of change should reduce the latencies and CPU usage 
>> of the broker significantly.
>>
>> Cheers,
>> David
>>
>>


-- 
Cheers,
David

Nuix Pty Ltd
Suite 79, 89 Jones St, Ultimo NSW 2007, Australia    Ph: +61 2 9280 0699
Web: http://www.nuix.com                            Fax: +61 2 9212 6902
