From: Dan Langford
Date: Thu, 11 May 2017 01:10:05 +0000
To: users@qpid.apache.org
Subject: Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Will you let me know if a Jira ticket is made as a result of this, so I can track which version gets an adjustment?

I did more testing around this and am convinced this is what caused our broker to get an Out Of Memory error for direct memory. We saw our broker crashing, and our primary client of the large backed-up queue was also crashing due to memory issues. In my testing those problems went away with a prefetch of 1. I think that when all those hundreds of thousands of messages were prefetched, both the client and the broker were holding them in memory and running out. With prefetch = 1 we were able to push through millions of messages with very few problems.

Thanks. I'm anxious for a Qpid JMS client I can encourage my customers to upgrade to so they can avoid this in the future. Let me know if you would like me to test any bug fixes.

On Fri, May 5, 2017 at 8:34 AM Dan Langford wrote:

> Thanks for the replies and the workaround. Getting this working will be
> great, as we mostly use the competing consumer approach here. When
> somebody's queue gets backed up to half a million messages, they want to
> just scale out their instances in Cloud Foundry to increase throughput.
>
> On Fri, May 5, 2017 at 7:09 AM Rob Godfrey wrote:
>
>> On 5 May 2017 at 14:14, Robbie Gemmell wrote:
>>
>> > I can also reproduce this. I believe it is a deficiency in how/when
>> > the client handles granting more link credit, and it will show
>> > particularly badly in the scenario described, where the broker is
>> > able to significantly/totally use the existing credit between
>> > processing of individual messages and there is a backlog of queued
>> > messages to continuously feed the scenario.
>> >
>> > To work around the issue and achieve the effect you are looking for,
>> > of balancing the backlog between multiple consumers when some come up
>> > later than others, you will need to reduce the prefetch setting to 0
>> > or 1.
>>
>> To be clear then, it is a bug in the JMS client rather than the broker :-)
>>
>> -- Rob
>>
>> > Robbie
>> >
>> > On 5 May 2017 at 10:07, Keith W wrote:
>> > > Hi Dan,
>> > >
>> > > Thanks for the comprehensive report. I can reproduce what you see
>> > > and confirm there appears to be a bug. I hope to be able to take a
>> > > closer look later today or Monday and get back to you with more
>> > > information.
>> > >
>> > > Keith
>> > >
>> > > On 4 May 2017 at 23:39, Dan Langford wrote:
>> > >> So over the past few weeks we have had a huge influx of messages
>> > >> on our enterprise message bus (Qpid Java 6.0.4 serves the AMQP 1.0
>> > >> messaging portion), and when one of our clients struggled to scale
>> > >> their application up, it got us looking at prefetch.
>> > >> We thought it was odd that all 500k messages in the queue were
>> > >> prefetched, and it was due to the prefetch that when they scaled
>> > >> out, the new connections could not help with those messages; they
>> > >> could only acquire new messages.
>> > >>
>> > >> So I started running tests on a local instance of Qpid Java 6.1.2,
>> > >> and I was able to duplicate the behavior, which seems odd.
>> > >>
>> > >> Setup:
>> > >> My Java code uses the JMS API to create a consumer, receiveNoWait
>> > >> a message, acknowledge or commit the message, then Thread.sleep
>> > >> for a bit so I can look at the Qpid Java Broker's web interface
>> > >> for stats on prefetched messages.
>> > >>
>> > >> Test 1: qpid-jms-client 0.22.0 with a prefetch of 10 set via the
>> > >> JMS URL parameter (jms.prefetchPolicy.all=10) OR via the prefetch
>> > >> policy on the ConnectionFactory (jmsDefaultPrefetchPolicy.setAll(10);).
>> > >> After the first message came in, the web interface showed the
>> > >> queue size decrement and 19 messages prefetched.
>> > >> After the second message, the queue size decremented again and 28
>> > >> messages were prefetched.
>> > >> After the third message, the queue size also decremented and 37
>> > >> messages were prefetched.
>> > >> So on and so forth.
>> > >>
>> > >> Test 2: qpid-client 6.1.2 with a prefetch of 10 set via the URL
>> > >> parameter maxprefetch='10'.
>> > >> After the first message came in, the web interface showed the
>> > >> queue size decrement and 10 messages prefetched.
>> > >> After the second message, the queue size decremented again and
>> > >> still 10 messages were prefetched.
>> > >> After the third message, the queue size also decremented and still
>> > >> 10 messages were prefetched.
>> > >> So on and so forth.
>> > >>
>> > >> Could it be a link credit thing? Could I be misunderstanding
>> > >> prefetch? Maybe jms.prefetchPolicy is not the same as maxprefetch?
>> > >>
>> > >> Frame logs are here:
>> > >> https://pastebin.com/4NHGCWEa
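
For anyone wanting to reproduce this, a minimal sketch of the test
harness described in the quoted report might look like the following.
The broker address and queue name are placeholder assumptions; the
prefetch of 10 is set via the jms.prefetchPolicy.all URI option quoted
in the report.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class PrefetchTest {
    public static void main(String[] args) throws Exception {
        // Prefetch of 10 via the URI option named in the report; the
        // broker address and queue name below are placeholders.
        JmsConnectionFactory factory = new JmsConnectionFactory(
                "amqp://localhost:5672?jms.prefetchPolicy.all=10");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("test.queue"));

        while (true) {
            // Receive without blocking, acknowledge, then pause so the
            // broker web interface can be checked for prefetch stats.
            Message message = consumer.receiveNoWait();
            if (message != null) {
                message.acknowledge();
            }
            Thread.sleep(10_000);
        }
    }
}

Per the report, this loop shows the prefetched-message count growing
(19, 28, 37, ...) under qpid-jms-client 0.22.0, whereas the expected
behavior is for it to stay pinned at 10, as the legacy client does in
Test 2.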
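
Since the workaround discussed in the thread is to reduce prefetch to
0 or 1, here is a rough sketch of that setting for both clients. The
host, credentials, client id, and virtual host are placeholders, and
the second URL is only an assumed example of the legacy client's
brokerlist connection-URL format.

import org.apache.qpid.client.AMQConnectionFactory;
import org.apache.qpid.jms.JmsConnectionFactory;

public class PrefetchWorkaround {
    public static void main(String[] args) throws Exception {
        // qpid-jms-client (AMQP 1.0): prefetch of 1 via the URI option
        // named in the report (jms.prefetchPolicy.all).
        JmsConnectionFactory amqp10Factory = new JmsConnectionFactory(
                "amqp://localhost:5672?jms.prefetchPolicy.all=1");

        // Legacy qpid-client 6.x (AMQP 0-x): maxprefetch as a connection
        // URL option; every connection detail here is a placeholder.
        AMQConnectionFactory amqp0xFactory = new AMQConnectionFactory(
                "amqp://guest:guest@clientid/test"
                        + "?brokerlist='tcp://localhost:5672'&maxprefetch='1'");
    }
}

A prefetch of 0 or 1 trades some throughput for balancing the backlog
across competing consumers, which is the behavior the original report
was after when scaling out instances.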