qpid-users mailing list archives

From "mark yoffe" <mark.yo...@gmail.com>
Subject Re: multiple listeners sharing a public queue
Date Thu, 20 Nov 2008 16:54:27 GMT
I would like to get that patch very much.

If you can, please send me the patch and point me to where I can use it.

Thank you very much.

Best Regards


On Thu, Nov 20, 2008 at 6:43 PM, Gordon Sim <gsim@redhat.com> wrote:

>  mark yoffe wrote:
>> Hi,
>> I have been running a scenario where several consumers connect to the
>> same public queue (topic). The consumers connect via the subscription
>> manager. I have run scenarios where multiple listeners use the same
>> connection (on different sessions) and scenarios where each listener has
>> its own connection. All of this has been run against trunk versions.
>> A while ago I started experiencing a serious downgrade in performance
>> when using such a scenario. In the past, the use of multiple listeners
>> enabled me to improve performance, but currently it appears that I can
>> achieve better performance with a single listener application.
>> In the past, when several listeners used a shared queue, the message
>> flow between the consumers was not even (with x listeners one would
>> expect each to receive 1/x of the messages). This problem/feature was
>> corrected to support a fairly even distribution of the work.
>> As far as I can tell, around the time this enhancement was made (it
>> might not be connected, but...) the performance started going down, and
>> it is not a small difference. Using one listener, I can process 100k
>> messages in time X; using five listeners, I experience a serious
>> downgrade in performance, to roughly 5X for the same 100k messages.
>> This was not the case in the past, and I am at a loss regarding this
>> problem. Does anyone know how this can be resolved?
> Sounds like https://issues.apache.org/jira/browse/QPID-1280, for which,
> unfortunately, I don't have a quick fix as yet. It was caused by some
> necessary changes to the locking (specifically to prevent deadlocks when
> using RDMA, though this may also have improved the 'fairness' of
> allocation).
> I can give you a one-line patch that should revert to the earlier
> performance if you like. I have some ideas for a proper fix, but they
> were going to be too involved for the M4 timeframe, I'm afraid.
> I found that adding more publishers also helps.
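
The "fair" allocation discussed above (each of x listeners receiving
roughly 1/x of the messages) can be illustrated with a minimal round-robin
dispatch sketch. This is a hypothetical illustration in Python, not Qpid
broker code; the `dispatch` function and the listener names are invented
for the example:

```python
# Hypothetical sketch (not Qpid code): round-robin dispatch, illustrating
# the even distribution described above, where each of x listeners
# receives roughly 1/x of the messages from a shared queue.
from collections import Counter
from itertools import cycle

def dispatch(messages, listeners):
    """Assign each message to the next listener in turn (round robin)."""
    counts = Counter()
    rr = cycle(listeners)
    for _ in messages:
        counts[next(rr)] += 1
    return counts

counts = dispatch(range(100_000), ["c1", "c2", "c3", "c4", "c5"])
# With 5 listeners, each receives exactly 100000 / 5 = 20000 messages.
```

The trade-off Gordon describes is that enforcing this kind of fairness in
the broker required coarser locking on the shared queue, so consumers that
previously ran mostly in parallel now contend for the same lock on every
dequeue, which can make five listeners slower in aggregate than one.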
