From: Robbie Gemmell <robbie.gemmell@gmail.com>
To: users@qpid.apache.org
Date: Tue, 14 Jan 2014 21:19:47 +0000
Subject: Re: Java broker - message grouping in C++ compatibility mode

Hi Helen,

I expected that you would see increased performance in some of the tests, as Rob did some more work on improving performance after the 0.16 release. I did expect performance to go down somewhat for case #1, as the defect fix put in place some missing synchronization between the delivery and acknowledgement steps, though I admit it dropped a bit more than I expected. Giving that some thought, it's possible that a change to the threading from some time ago (in 0.16, I think), which generally increased performance all round at the time, may in fact hurt it in that scenario.

In terms of introducing a new mode of grouping, we would actually be quite open to this, as it's something we have commented on separately ourselves but never quite got round to implementing; so much to do, so little time, etc.
In particular, I have never liked the 'default group' disparity between the two modes, and the inability to really change it on top of that. Something we talked about was allowing the default group to be controlled for both grouping types, such that you could specifically turn one on for the 'standard grouping' which doesn't normally have one, or specifically turn it off for the 'shared groups' which normally do. This would allow them to act a lot more alike, with the distinction between them then becoming more about whether the groups can float between consumers or not.

Doing the above would allow you to do away with setting the unique message ID on the messages you don't wish to group and enable any consumer to pick them up, which should significantly change the performance seen consuming from a 'shared groups' queue that has a mixture of messages in groups and messages not in groups (since a particular subscription becomes much more likely to find a message earlier in the queue if/when the group reset occurs).

I think making additional changes outwith those, to try and combat a queue full of unique groups in the shared groups case, may require larger changes to the queue itself, such as a custom delivery loop to allow altering when the messages can be assigned to the group, or even moving away from the current 'group manager' addition to the base queue and more towards a custom queue, as is done currently for the priority and sorted queue implementations.

Robbie

On 14 January 2014 07:15, Helen Kwong wrote:

> Hi Robbie,
>
> I ran the tests again with a broker built from the latest trunk code and with a 0.16 client. The performance results are a lot better in almost every test setup (3-4 times better), except for Test #1, where I build up 100000 messages with unique grouping values and then see how many messages we can process in 5 minutes, using C++ mode (I got ~7800 messages before, and now ~5400). The overall comparisons are about the same, with C++ mode still performing significantly worse:
>
> 1. For the first test, for a regular queue and a default mode queue (with all messages having no grouping values, all having the same value, and all having different values), I am able to process 100000 messages in around 3 and a half minutes. In C++ mode, with all 100000 messages having different values, I can process only ~5400 in 5 minutes; with all having no value / the default group, ~44000 messages were processed in 5 minutes (after which I stop waiting and just clear the queue).
>
> 2. For the second test, for a default mode queue, the performance is again not affected by how many unprocessable messages we have at the front of the queue due to another consumer holding on to those messages' group for a long time. The time it takes to process 1000 messages of group B after N messages of group A (which is assigned to another consumer) is about the same as processing 1000 messages on a regular queue. With a C++ mode queue, the performance still gets worse the more messages of group A we have at the front of the queue.
>
> > Can I enquire what it is about your use case that would preclude use of a higher prefetch?
>
> We want to avoid starving messages that can be processed by another consumer. We have multiple dequeue sessions, each listening to multiple queues.
> If we have a higher prefetch, then if there is a long-running message on a queue, the messages that were prefetched along with it will have to wait for a long time, even if another listener is available and can process those messages. And if we decide to add the message grouping configuration, we want to avoid starving a message of a particular group if there's an available listener -- if a listener can have 2 messages and 2 different groups assigned at a time, and processing of the first message takes a long time, then we might be unfairly starving the second message, even though other listeners can process it.
>
> We might be able to tolerate this to some extent if the prefetch is only 2, if this is the only way to improve overall throughput. I'll have to discuss this more with my team. Though this still wouldn't solve the problem with unique keys in C++ mode.
>
> > Also, you originally seemed to prefer the idea that messages without a group header value would not be grouped, so is there a particular reason you are leaning towards using the shared groups functionality which can't do that?
>
> Basically, the ideal behavior we want is a third mode that combines the two. If no grouping value is specified, then treat it as though it has no group; for messages with a grouping value, ensure that only 1 listener can be processing messages from a particular group at a time, but don't tie a group to a particular consumer for the lifetime of that consumer. The reason we want the first part is that we'll have many messages that don't belong to any group, and we want them to be processed by different listeners in parallel. The reason we don't want a consumer to be associated with a group for as long as the consumer lives is again to avoid starvation -- we don't want a consumer processing a message of group A for a long time to result in starving group B's messages, just because it happened that the first group B message was processed by that consumer and so group B is assigned to it, when there's another consumer of the same queue that is doing nothing and can process the B message.
>
> So we were thinking of asking you guys if you'd be open to introducing a third mode, mostly the same as the C++ mode but where no grouping value means no group, instead of the default group. Another workaround we thought of was, for any message that doesn't belong to a group, we'll put its message ID in the grouping key property, so that essentially any listener can pick it up. That's one reason why we were testing the unique keys case, though it's also possible for us to get many different grouping values in a queue (though not at quite as high a number, e.g., 100000). But in C++ mode, with both unique values and same-group values at a high depth, we're seeing decreased performance anyway, so we might not be able to use it.
>
> Do you have any suggestions for what we should do? Are there any ideas you have for solving the performance issue with unique keys in C++ mode, so perhaps we could look into it more?
>
> Thanks,
> Helen
>
> > On 13 January 2014 20:34, Helen Kwong wrote:
> >
> > > Hi Robbie,
> > >
> > > I am actually still running version 0.16 of the broker. It will take me a little time to set up the trunk code and rebuild and rerun the experiments. Do you think your fix will likely make a difference?
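A minimal sketch of the workaround described above (giving every ungrouped message its own unique group value so any consumer can take it), assuming a plain JMS session and producer and a queue whose qpid.group_header_key is set to JMSXGroupID; the class name, helper, and property choice are illustrative only, not the actual test code:

    import java.util.UUID;

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class GroupedSendExample {

        // Illustrative helper: messages with a real logical group share that
        // group value; messages with no logical group get a unique value, so a
        // shared-groups ("C++ mode") queue never serialises them behind each other.
        static void send(Session session, MessageProducer producer,
                         String text, String logicalGroup) throws JMSException {
            Message message = session.createTextMessage(text);
            String groupValue = (logicalGroup != null)
                    ? logicalGroup
                    : UUID.randomUUID().toString(); // unique "group of one"
            message.setStringProperty("JMSXGroupID", groupValue);
            producer.send(message);
        }
    }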
> > > For the second case with a long-lived consumer being assigned the group of many messages at the head of the queue, I was indeed using a prefetch of 1. I ran it again (with version 0.16 still) with a prefetch of 2 as you suggested, and the dequeue time of the messages at the end was then not affected by the number of unprocessable messages at the beginning of the queue, about the same as the other test setups I ran. However, I think increasing prefetch to 2 might not work for our use case.
> > >
> > > For the first case with unique message groups, your explanation makes sense and I think I understand it now. Do you think there is still a way to optimize this behavior, so that we don't need to possibly traverse through the whole queue whenever a subscription is unassigned from a group? Since you mentioned that maintaining a fixed pointer to a queue entry would likely lead to memory retention issues, would having a weak reference be a possible option?
> > >
> > > Thanks,
> > > Helen
> > >
> > > On Sat, Jan 11, 2014 at 7:58 AM, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
> > >
> > > > Hi Helen,
> > > >
> > > > Can I check what version of the code you were using? I ask as the latest trunk or 0.26 release branch code is going to be necessary for correctness and representative testing of the shared groups functionality, due to the defect fix I mentioned previously. You can find a nightly build of the trunk broker at:
> > > > https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-Java-Artefact-Release/lastSuccessfulBuild/artifact/trunk/qpid/java/broker/release/
> > > > and I would need to build the 0.26 branch, as the fix was introduced after the latest RC.
> > > >
> > > > In your first case, I think the reason for the difference between the default group and the unique group is also likely to be tied to the 'findEarliestAssignedAvailableEntry' behaviour you mention later in your mail. For the default group case, that next message is always going to be a message near the front of the queue. For the unique group, there isn't actually going to be a message which matches, but it looks like it will currently be checking every message to determine that, and doing so under the synchronization, thus probably preventing other deliveries occurring at the time. That isn't a problem in the non-shared case because there isn't a need to synchronise the GroupManager as a whole, and even beyond that, it's also highly unlikely it would need to check as many messages before finding a match, due to the significant difference in how groups become associated with a particular subscription in the non-shared case.
> > > >
> > > > In your second case, your explanation seems likely and I think this case really reduces to just being a variant of the above behaviour. The particular issue is that one could argue it shouldn't need to be doing the 'findEarliestAssignedAvailableEntry' task all that often in this case if you have a long-lived consumer, and so your mention of this makes me think you are using a prefetch of 1.
> > > > Using a prefetch of 1 currently means that the delivery state associated with the shared group effectively becomes empty after each message, because messages are only fully added to the group when they become acquired by a particular subscription, and they can't be acquired until the previous message is consumed (or, perhaps slightly confusingly, explicitly not-consumed). If so, I expect it could be very interesting to run this case again with a prefetch of 2 or more. The obvious tradeoff with increasing prefetch is that a particular consumer could then be assigned up to <prefetch> groups at a given point, though likely not in your test case due to the large contiguous blocks of messages for each group.
> > > >
> > > > I'm not sure that the suggestion to track the first message in the group would really work currently, due to the way the underlying queue data structure works. Maintaining a fixed pointer into it like that is likely to lead to some undesirable memory retention issues, based on a related but far simpler case I fixed previously in a similar structure elsewhere in the broker. Looking at the way messages become assigned to a group in the shared group case may be a more viable path to handling your second case more gracefully. The unique groups from your first case would still need something different though, as neither of these routes would really help there.
> > > >
> > > > Robbie
> > > >
> > > > On 11 January 2014 01:03, Helen Kwong wrote:
> > > >
> > > > > Hi Robbie,
> > > > >
> > > > > I did some more testing to see whether message grouping will work for us, and compared the dequeue performance of a queue using message grouping in default Java mode, a queue using C++ mode, and a queue not using message grouping. I found that when I use C++ mode, the performance can be much worse than in other comparable setups, and was wondering if you could help me understand why.
> > > > >
> > > > > 1. In one test, I have multiple listeners to a queue, enqueue 100000 messages to it, and see how many messages are processed in 5 minutes. I have these different setups:
> > > > >
> > > > > - C++ mode queue with each message having a unique identifier
> > > > >
> > > > > - C++ mode queue with all messages having no grouping identifier (so all belong to the default group)
> > > > >
> > > > > - default mode queue with each message having a unique grouping identifier
> > > > >
> > > > > - default mode queue with all messages having no grouping identifier
> > > > >
> > > > > - default mode queue with all messages having the same grouping identifier
> > > > >
> > > > > - regular queue without a group header key configured
> > > > >
> > > > > All setups except for the first had about 35K - 39K messages processed, but for the first setup, there were under 8000 messages processed. What could explain this big difference?
> > > > > I've looked at the two grouping modes' implementations of MessageGroupManager and see that C++ mode uses synchronized methods rather than a ConcurrentHashMap as in default mode, so I'd guess there might be more contention because of that, but at the same time I can't see why, in C++ mode, having a unique identifier for each message results in throughput that is so much worse than having all messages in the default group.
> > > > >
> > > > > 2. I also wanted to see the impact of having many messages at the head of the queue that a listener can't process because the messages belong to a group assigned to another consumer. E.g., have 10000 messages of group A, followed by 1000 messages of group B, and listener 1 is holding on to the first A message for a long time -- see how long it will take listener 2 to process all the B messages. In this case C++ mode has performance that degrades as the number of unprocessable group A messages at the front of the queue increases, whereas default mode's performance is unaffected, about the same as processing 1000 messages on a regular queue.
> > > > >
> > > > > My rough guess from looking at DefinedGroupMessageGroupManager is that whenever listener 2 is done with a group B message, the state change listener triggers Group.subtract() to reset pointers for other subscriptions and consequently findEarliestAssignedAvailableEntry(). This then has to iterate through all the group A messages before it finds the B message. Do you think this is the reason for the results I see?
> > > > >
> > > > > If so, is the idea here that other subscriptions of the queue could have skipped over the messages of a group while the group was assigned to some subscription S, so we need to tell them to set their pointers back? If that is indeed the idea, would it be possible to optimize it such that when a group A is assigned to S and S gets its first message of the group, we store what that first A message / queue entry is? Then when S is done with the last A message, we can tell other subscriptions to go back to that first entry, without having to iterate through the queue.
> > > > >
> > > > > Thanks a lot for your help!
> > > > >
> > > > > Helen
> > > > >
> > > > > On Tue, Jan 7, 2014 at 8:46 PM, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
> > > > >
> > > > > > ...and just to be super clear, though I think it is mentioned correctly in the docs this time, the 'default group' concept does not apply in the regular / 'non shared' grouping mode. Messages that don't specify a group key value in that mode are simply not grouped in any way.
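For context, a rough outline of the kind of competing-listener setup used in tests like those described above, assuming the standard JMS API with CLIENT_ACKNOWLEDGE sessions; connection handling, names, and the sleep-based "work" are illustrative only, not the actual test harness:

    import javax.jms.Connection;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageListener;
    import javax.jms.Session;

    public class GroupedQueueListeners {

        // Attach several competing listeners to the same queue; each one
        // simulates per-message work with a short sleep before acknowledging.
        public static void start(final Connection connection, final String queueName,
                                 final int listenerCount, final long workMillis) throws JMSException {
            for (int i = 0; i < listenerCount; i++) {
                final Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
                consumer.setMessageListener(new MessageListener() {
                    public void onMessage(Message message) {
                        try {
                            Thread.sleep(workMillis);   // simulated processing time
                            message.acknowledge();
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } catch (JMSException e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
            connection.start();
        }
    }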
> > > > > > On 8 January 2014 04:41, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
> > > > > >
> > > > > > > On 8 January 2014 04:33, Helen Kwong wrote:
> > > > > > >
> > > > > > >> Oh I see, I thought what you meant was that I could only alter the default group in shared-groups mode starting with 0.24.
> > > > > > >
> > > > > > > No, I just missed that you said 0.16 and assumed 0.24 was the version you were using. You could always change it, just in more limited ways in earlier releases.
> > > > > > >
> > > > > > >> To make sure I'm understanding this correctly -- changing the default message group name to something else in C++ mode won't change the serial processing behavior I saw, right?
> > > > > > >
> > > > > > > Correct.
> > > > > > >
> > > > > > >> Messages without a group identifier will still be considered to be in a group -- rather than no group -- and they cannot be processed by multiple consumers concurrently?
> > > > > > >
> > > > > > > Yes. In the C++/shared-groups mode every message is considered to be in a group; it is just a case of whether the message specifies that group or instead gets put into the default group.
> > > > > > >
> > > > > > >> Thanks,
> > > > > > >> Helen
> > > > > > >>
> > > > > > >> On Tue, Jan 7, 2014 at 8:22 PM, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
> > > > > > >>
> > > > > > >> > I just noticed you said you were using 0.16; I somehow glossed over it originally and only noticed the 0.24 in the doc URL (it's many hours past time I was asleep, I might be getting tired).
> > > > > > >> >
> > > > > > >> > Realising that, I should add that prior to 0.22 the only way to alter the default group in the shared-groups mode from 'qpid.no-group' to something else would have been via the 'qpid.default-message-group' queue declare argument when using an AMQP client to create the queue originally, and for 0.22 itself only that and the system property approach I mentioned would work.
> > > > > > >> >
> > > > > > >> > Robbie
> > > > > > >> >
> > > > > > >> > On 8 January 2014 04:03, Helen Kwong wrote:
> > > > > > >> >
> > > > > > >> > > Hi Robbie,
> > > > > > >> > >
> > > > > > >> > > I see. Thanks for the quick response and explanation!
> > > > > > >> > >
> > > > > > >> > > Helen
> > > > > > >> > >
> > > > > > >> > > On Tue, Jan 7, 2014 at 7:43 PM, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
> > > > > > >> > >
> > > > > > >> > > > Hi Helen,
> > > > > > >> > > >
> > > > > > >> > > > The short answer to your question is that it is the documentation which is incorrect, and the behaviour you are seeing is expected.
> > > > > > >> > > > > > > > > > >> > > > The long answer is, when that documentation was > composed a > > > > > segment > > > > > > >> was > > > > > > >> > > > missed out indicating this, and needs to be added to t= he > > > docs. > > > > > The > > > > > > >> > > > behaviour listed for when no group is specified is onl= y > > true > > > > of > > > > > > the > > > > > > >> > > > 'non-shared' groups supported by the Java broker, in t= he > > > > > > C++/shared > > > > > > >> > group > > > > > > >> > > > mode any messages recieved without an explicit group > value > > > are > > > > > all > > > > > > >> > > assigned > > > > > > >> > > > to a default group of 'qpid.no-group'. This is as per > the > > > > > > behaviour > > > > > > >> of > > > > > > >> > > the > > > > > > >> > > > C++ broker itself, which is explained in the C++ broke= r > > docs > > > > at > > > > > > the > > > > > > >> end > > > > > > >> > > of > > > > > > >> > > > the following page > > > > > > >> > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > >> > > > > > > > >> > > > > > > > > > > > > > > > > > > > > > http://qpid.apache.org/releases/qpid-0.24/cpp-broker/book/Using-message-g= roups.html > > > > > > >> > > > . > > > > > > >> > > > For the 0.24 Java broker, this default shared group ca= n > be > > > > > changed > > > > > > >> > > > broker-wide using the Java system property > > > > > > >> > > > 'qpid.broker_default-shared-message-group', or can be > > > > overriden > > > > > > for > > > > > > >> an > > > > > > >> > > > individual queue during creation programatically via > AMQP > > > > > clients > > > > > > or > > > > > > >> > the > > > > > > >> > > > management interfaces through use of the argument > > > > > > >> > > > 'qpid.default-message-group' or > > 'messageGroupDefaultGroup'. > > > > > > >> > > > > > > > > > >> > > > I coincidentally happened to have fixed a defect with > the > > > > shared > > > > > > >> groups > > > > > > >> > > > functionality last night on trunk. Its not yet include= d > in > > > the > > > > > > >> imminent > > > > > > >> > > > 0.26 release, though I am about to request whether tha= t > is > > > > > > possible. > > > > > > >> > > > https://issues.apache.org/jira/browse/QPID-5450 > > > > > > >> > > > > > > > > > >> > > > Robbie > > > > > > >> > > > > > > > > > >> > > > On 8 January 2014 02:43, Helen Kwong < > > helenkwong@gmail.com> > > > > > > wrote: > > > > > > >> > > > > > > > > > >> > > > > Hi, > > > > > > >> > > > > > > > > > > >> > > > > I use the Java broker and client, version 0.16, and = am > > > > > > considering > > > > > > >> > > using > > > > > > >> > > > > the message grouping feature ( > > > > > > >> > > > > > > > > > > >> > > > > > > > > > > >> > > > > > > > > > >> > > > > > > > > >> > > > > > > > >> > > > > > > > > > > > > > > > > > > > > > http://qpid.apache.org/releases/qpid-0.24/java-broker/book/Java-Broker-Qu= eues.html#Java-Broker-Queues-OtherTypes-Message-Grouping > > > > > > >> > > > > ). > > > > > > >> > > > > From testing I've done, there seems to be a bug with > the > > > C++ > > > > > > >> > > > compatibility > > > > > > >> > > > > model, and I'm wondering if this is a known issue. > > > > > Specifically, > > > > > > >> in > > > > > > >> > my > > > > > > >> > > > test > > > > > > >> > > > > I have a queue configured to use a group header fiel= d > > with > > > > > > >> > > > > "qpid.group_header_key" and C++ mode with > > > > > > "qpid.shared_msg_group", > > > > > > >> > and > > > > > > >> > > > have > > > > > > >> > > > > multiple listeners to the queue. 
> > > > > > >> > > > > Each listener will sleep for a short amount of time when it receives a message before returning. I then enqueue 10 messages that do not have a value in the group header field. Since the doc says that messages without a value in the grouping header will be delivered to any available consumer, the behavior I expect is that the messages will be processed in parallel, i.e., when listener 1 is holding on to a message and sleeping, listener 2 can receive another message from the queue. But what I see is that the messages are processed serially -- message 2 won't be received by some thread until message 1 is done. When I use the default mode instead of C++ mode, then I get the parallel processing behavior.
> > > > > > >> > > > >
> > > > > > >> > > > > Is this a known bug, and is there a fix for it already?
> > > > > > >> > > > >
> > > > > > >> > > > > Thanks,
> > > > > > >> > > > > Helen
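As a final illustration, one way a queue along the lines Helen describes could be declared from the older Qpid Java client is via an ADDR-style address with x-declare arguments, as sketched below. The queue name, group header name, and default-group override value are placeholders, and whether a given client version accepts an address string passed to createQueue this way should be checked against its documentation -- this is a hedged sketch, not the configuration used in the tests above.

    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.Session;

    public class GroupedQueueDeclaration {

        // Illustrative ADDR-style address: declare a queue that groups on the
        // "JMSXGroupID" header, uses the C++-compatible shared-groups mode, and
        // (optionally) overrides the default shared group name.
        static Destination declareGroupedQueue(Session session) throws JMSException {
            String address =
                "ADDR:my-grouped-queue; "
                + "{create: always, node: {type: queue, x-declare: {arguments: "
                + "{'qpid.group_header_key': 'JMSXGroupID', "
                + " 'qpid.shared_msg_group': 1, "
                + " 'qpid.default-message-group': 'no-group-placeholder'}}}}";
            return session.createQueue(address);
        }
    }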