From: Adel Boutros <adelboutros@live.com>
To: users@qpid.apache.org
Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
Date: Tue, 2 Aug 2016 20:10:18 +0200

Hello Ted, Gordon,

When I say the JMS producers are sending synchronously, I mean they don't set any options on the connection URL such as jms.forceAsyncSend. So I guess this means the producer will wait for settlement of message X before sending message X + 1.
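For clarity, the only difference between what I run and an asynchronous producer would be this connection URL option (a minimal sketch, with placeholder host and port):

// What the tests use today: no URL options, so send() waits for the peer to settle each delivery.
ConnectionFactory syncFactory = new JmsConnectionFactory("amqp://machine:port");

// Hypothetical asynchronous variant: jms.forceAsyncSend=true lets send() return without waiting for settlement.
ConnectionFactory asyncFactory = new JmsConnectionFactory("amqp://machine:port?jms.forceAsyncSend=true");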
When I say it fails, I mean that with 1 producer, I have 2500 msg/s. When I add a second producer, I am at 4800 msg/s (which is roughly twice the throughput of a single producer). But when I add a 3rd producer, I am at 5100 msg/s while I expect it to be around 7500 msg/s. So for me the scaling stops working when adding a 3rd producer and above.

What you both explained to me about the single connection is indeed a plausible candidate, because in the "broker only" tests the throughput of a single connection is around 3 500 msg/s. So on a single connection, I shouldn't go above that figure, which is what I am seeing. I imagine that when I add more producers/consumers, the throughput will shrink even more because the same connection is used by all the producers and the consumers.

Do you think it might be a good idea if the connections were per workerThread and not only a single connection?

Another solution would be to use a maximum of 3 clients (producer or consumer) per dispatcher and have a network of interconnected dispatchers, but I find that very heavy and hard to maintain and support on the client side. Do you agree?

JMS Producer code

// Qpid JMS client (org.apache.qpid.jms.JmsConnectionFactory), no URL options set
ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Topic topic = session.createTopic("perf.topic");
MessageProducer messageProducer = session.createProducer(topic);
messageProducer.send(message); // blocks until the delivery is settled (synchronous send)
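The message object itself is built elsewhere in our harness; a minimal sketch of it, matching the 100-byte byte array described in my earlier mail quoted below, could be:

// Hypothetical construction of the "message" variable used above: a 100-byte body, as in the tests.
BytesMessage message = session.createBytesMessage();
message.writeBytes(new byte[100]);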
Regards,
Adel

> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> To: users@qpid.apache.org
> From: tross@redhat.com
> Date: Tue, 2 Aug 2016 13:42:24 -0400
>
> On 07/29/2016 08:40 AM, Adel Boutros wrote:
> > Hello Ted,
> >
> > Increasing the link capacity had no impact. So, I have done a series of tests to try and isolate the issue.
> > We tested 3 different architectures without any consumers:
> > Producer --> Broker
> > Producer --> Dispatcher
> > Producer --> Dispatcher --> Broker
> > In every test, we sent 100 000 messages which contained a byte array of 100 bytes. The producers are sending in synchronous mode and with AUTO_ACKNOWLEDGE.
> >
> > Our benchmark machines have 20 cores and 396 GB of RAM each. We have currently put consumers/producers on one machine and dispatcher/brokers on another machine. They are both connected with a 10 Gbps Ethernet connection. Nothing else is using the machines.
> >
> > The results are in the table below.
> >
> > What I could observe:
> > The broker alone scales well when I add producers
> > The dispatcher alone scales well when I add producers
> > The dispatcher connected to a broker scales well with 2 producers
> > The dispatcher connected to a broker fails when having 3 producers or more
>
> In what way does it fail?
>
> > I also did some "qdstat -l" while the test was running and at max had 5 unsettled deliveries. So I don't think the problem comes from the linkCapacity.
>
> You mentioned that you are running in synchronous mode. Does this mean that each producer is waiting for settlement on message X before sending message X+1?
>
> > What else can we look at? How does the dispatcher connect the producers to the broker? Does it open a new connection with each new producer? Or does it use some sort of a connection pool?
>
> The router multiplexes the broker traffic over a single connection to the broker.
>
> > Could the issue come from the capacity configuration of the link in the connection between the broker and the dispatcher?
>
> Probably not in your case since the backlogs are much smaller than the default capacity.
>
> > Number of Producers | Broker | Dispatcher | Combined Producer Throughput (msg/s) | Combined Producer Latency (micros)
> > 1                   | YES    | NO         |  3 500                               | 370
> > 4                   | YES    | NO         |  9 200                               | 420
> > 1                   | NO     | YES        |  6 000                               | 180
> > 2                   | NO     | YES        | 12 000                               | 192
> > 3                   | NO     | YES        | 16 000                               | 201
> > 1                   | YES    | YES        |  2 500                               | 360
> > 2                   | YES    | YES        |  4 800                               | 400
> > 3                   | YES    | YES        |  5 200                               | 540
> >
> > qdstat -l
> > bash$ qdstat -b dell445srv:10254 -l
> > Router Links
> >   type      dir  conn id  id  peer  class   addr                  phs  cap  undel  unsettled  deliveries  admin    oper
> >   ======================================================================================================================
> >   endpoint  in   19       46        mobile  perfQueue             1    250  0      0          0           enabled  up
> >   endpoint  out  19       54        mobile  perf.topic            0    250  0      2          4994922     enabled  up
> >   endpoint  in   27       57        mobile  perf.topic            0    250  0      1          1678835     enabled  up
> >   endpoint  in   28       58        mobile  perf.topic            0    250  0      1          1677653     enabled  up
> >   endpoint  in   29       59        mobile  perf.topic            0    250  0      0          1638434     enabled  up
> >   endpoint  in   47       94        mobile  $management           0    250  0      0          1           enabled  up
> >   endpoint  out  47       95        local   temp.2u+DSi+26jT3hvZ       250  0      0          0           enabled  up
> >
> > Regards,
> > Adel
> >
> >> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >> To: users@qpid.apache.org
> >> From: tross@redhat.com
> >> Date: Tue, 26 Jul 2016 10:32:29 -0400
> >>
> >> Adel,
> >>
> >> That's a good question. I think it's highly dependent on your requirements and the environment. Here are some random thoughts:
> >>
> >>   - There's a trade-off between memory use (message buffering) and throughput. If you have many clients sharing the message bus, smaller values of linkCapacity will protect the router memory. If you have relatively few clients wanting to go fast, a larger linkCapacity is appropriate.
> >>   - If the underlying network has high latency (satellite links, long distances, etc.), larger values of linkCapacity will be needed to protect against stalling caused by delayed settlement.
> >>   - The default of 250 is considered a reasonable compromise. I think a value around 10 is better for a shared bus, but 500-1000 might be better for throughput with few clients.
> >>
> >> -Ted
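Applied to the listener and route-container connector shown in the original configuration further down, a larger linkCapacity would look roughly like this (a sketch only; 500 is simply the low end of the range suggested above for a few fast clients):

listener {
    host: 0.0.0.0
    port: 10454
    role: normal
    saslMechanisms: ANONYMOUS
    authenticatePeer: no
    linkCapacity: 500
}

connector {
    name: localhost.broker.10455.connector
    role: route-container
    addr: localhost
    port: 10455
    linkCapacity: 500
}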
> >> On 07/26/2016 10:08 AM, Adel Boutros wrote:
> >>> Thanks Ted,
> >>>
> >>> I will try to change linkCapacity. However, I was wondering if there is a way to "calculate an optimal value for linkCapacity". What factors can impact this field?
> >>>
> >>> Regards,
> >>> Adel
> >>>
> >>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> >>>> To: users@qpid.apache.org
> >>>> From: tross@redhat.com
> >>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
> >>>>
> >>>> Adel,
> >>>>
> >>>> The number of workers should be related to the number of available processor cores, not the volume of work or number of connections. 4 is probably a good number for testing.
> >>>>
> >>>> I'm not sure what the default link credit is for the Java broker (it's 500 for the c++ broker) or the clients you're using.
> >>>>
> >>>> The metric you should adjust is the linkCapacity for the listener and route-container connector. LinkCapacity is the number of deliveries that can be in-flight (unsettled) on each link. Qpid Dispatch Router defaults linkCapacity to 250. Depending on the volumes in your test, this might account for the discrepancy. You should try increasing this value.
> >>>>
> >>>> Note that linkCapacity is used to set the initial credit for your links.
> >>>>
> >>>> -Ted
> >>>>
> >>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> >>>>> Hello,
> >>>>>
> >>>>> We are actually running some performance benchmarks in an architecture consisting of a Java Broker connected to a Qpid dispatch router. We also have 3 producers and 3 consumers in the test. The producers send messages to a topic which has a binding on a queue with a filter, and the consumers receive messages from that queue.
> >>>>>
> >>>>> We have noticed a significant loss of performance in this architecture compared to an architecture composed of a simple Java Broker. The throughput of the producers is down to half and there are a lot of oscillations in the presence of the dispatcher.
> >>>>>
> >>>>> I have tried to double the number of workers on the dispatcher but it had no impact.
> >>>>>
> >>>>> Can you please help us find the cause of this issue?
> >>>>>
> >>>>> Dispatch router config
> >>>>> router {
> >>>>>     id: router.10454
> >>>>>     mode: interior
> >>>>>     worker-threads: 4
> >>>>> }
> >>>>>
> >>>>> listener {
> >>>>>     host: 0.0.0.0
> >>>>>     port: 10454
> >>>>>     role: normal
> >>>>>     saslMechanisms: ANONYMOUS
> >>>>>     requireSsl: no
> >>>>>     authenticatePeer: no
> >>>>> }
> >>>>>
> >>>>> Java Broker config
> >>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
> >>>>> 1 Topic + 1 Queue
> >>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
> >>>>>
> >>>>> Qdmanage on Dispatcher
> >>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
> >>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
> >>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
> >>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
> >>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
> >>>>>
> >>>>> Combined producer throughput
> >>>>> 1 Broker: http://hpics.li/a9d6efa
> >>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
> >>>>>
> >>>>> Regards,
> >>>>> Adel
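For completeness, a consumer in this setup might look like the sketch below (hypothetical code, not taken from the thread; it assumes the consumers read the perfQueue address that the autoLink above routes in from the broker):

// Hypothetical consumer counterpart (javax.jms.* plus org.apache.qpid.jms.JmsConnectionFactory).
ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
Connection connection = connectionFactory.createConnection();
connection.start();                                           // required before messages can be received
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue queue = session.createQueue("perfQueue");
MessageConsumer consumer = session.createConsumer(queue);
Message received = consumer.receive(5000);                    // wait up to 5 seconds for a message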
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> For additional commands, e-mail: users-help@qpid.apache.org