qpid-users mailing list archives

From Adel Boutros <adelbout...@live.com>
Subject RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
Date Tue, 02 Aug 2016 17:29:39 GMT
Hello Ted,
Were you able to check the points below? Could some other resource in the
code be getting congested, such as the mutex mechanism or the I/O?

Regards,
Adel

> From: adelboutros@live.com
> To: users@qpid.apache.org
> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> Date: Fri, 29 Jul 2016 14:45:48 +0200
> 
> Here is an image representation of the badly formatted table: http://imgur.com/a/EuWch

> > From: adelboutros@live.com
> > To: users@qpid.apache.org
> > Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > Date: Fri, 29 Jul 2016 14:40:10 +0200
> > 
> > Hello Ted,
> > 
> > Increasing the link capacity had no impact. So I have done a series of
> > tests to try and isolate the issue.
> > We tested 3 different architectures without any consumers:
> > Producer --> Broker
> > Producer --> Dispatcher
> > Producer --> Dispatcher --> Broker
> > In every test, we sent 100 000 messages, each containing a byte array of
> > 100 bytes. The producers send in synchronous mode with AUTO_ACKNOWLEDGE
> > (see the sketch below).
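> > 
> > As a minimal sketch of the producer loop (assuming the Qpid JMS AMQP 1.0
> > client; host, port and class name are illustrative):
> > 
> > import javax.jms.*;
> > import org.apache.qpid.jms.JmsConnectionFactory;
> > 
> > public class PerfProducer {
> >     public static void main(String[] args) throws Exception {
> >         // Router listener address (illustrative)
> >         ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:10454");
> >         Connection connection = factory.createConnection();
> >         connection.start();
> >         // Non-transacted session with AUTO_ACKNOWLEDGE, as in the tests
> >         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
> >         MessageProducer producer = session.createProducer(session.createTopic("perf.topic"));
> >         BytesMessage msg = session.createBytesMessage();
> >         msg.writeBytes(new byte[100]); // 100-byte payload
> >         long start = System.nanoTime();
> >         for (int i = 0; i < 100000; i++) {
> >             producer.send(msg); // synchronous: blocks until settled
> >         }
> >         double secs = (System.nanoTime() - start) / 1e9;
> >         System.out.printf("%.0f msg/s%n", 100000 / secs);
> >         connection.close();
> >     }
> > }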
> > 
> > Our benchmark machines each have 20 cores and 396 GB of RAM. We have
> > currently put the consumers/producers on one machine and the
> > dispatcher/brokers on another. The two machines are connected by a
> > 10 Gbps Ethernet link, and nothing else is using them.
> > 
> > The results are in the table below.
> > 
> > What I could observe:
> > The broker alone scales well when I add producers
> > The dispatcher alone scales well when I add producers
> > The dispatcher connected to a broker scales well with 2 producers
> > The dispatcher connected to a broker fails with 3 producers or more
> > 
> > I also did some "qdstat -l" runs while the test was running and saw at
> > most 5 unsettled deliveries, so I don't think the problem comes from the
> > linkCapacity.
> > 
> > What else can we look at? How does the dispatcher connect the producers
> > to the broker? Does it open a new connection for each new producer, or
> > does it use some sort of connection pool?
> > 
> > Could the issue come from the capacity configuration of the link on the
> > connection between the broker and the dispatcher?
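> > 
> > (One way to check the connection mapping, as a sketch -- the address is
> > the same one used for the qdstat -l call below -- is to list the open
> > connections, one row per client plus the route-container connection to
> > the broker:)
> > 
> > bash$ qdstat -b dell445srv:10254 -c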
> > 
> > 
> >   Number of  Broker  Dispatcher  Combined Producer   Combined Producer
> >   Producers                      Throughput (msg/s)  Latency (micros)
> >   ===================================================================
> >   1          YES     NO                       3 500                370
> >   4          YES     NO                       9 200                420
> >   1          NO      YES                      6 000                180
> >   2          NO      YES                     12 000                192
> >   3          NO      YES                     16 000                201
> >   1          YES     YES                      2 500                360
> >   2          YES     YES                      4 800                400
> >   3          YES     YES                      5 200                540
> > 
> > qdstat -l
> > bash$ qdstat -b dell445srv:10254 -l
> > Router Links
> >   type      dir  conn id  id  peer  class   addr                  phs  cap  undel  unsettled  deliveries  admin    oper
> >   ======================================================================================================================
> >   endpoint  in   19       46        mobile  perfQueue             1    250  0      0          0           enabled  up
> >   endpoint  out  19       54        mobile  perf.topic            0    250  0      2          4994922     enabled  up
> >   endpoint  in   27       57        mobile  perf.topic            0    250  0      1          1678835     enabled  up
> >   endpoint  in   28       58        mobile  perf.topic            0    250  0      1          1677653     enabled  up
> >   endpoint  in   29       59        mobile  perf.topic            0    250  0      0          1638434     enabled  up
> >   endpoint  in   47       94        mobile  $management           0    250  0      0          1           enabled  up
> >   endpoint  out  47       95        local   temp.2u+DSi+26jT3hvZ       250  0      0          0           enabled  up
> > 
> > Regards,
> > Adel
> > 
> > > Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > > To: users@qpid.apache.org
> > > From: tross@redhat.com
> > > Date: Tue, 26 Jul 2016 10:32:29 -0400
> > > 
> > > Adel,
> > > 
> > > That's a good question.  I think it's highly dependent on your 
> > > requirements and the environment.  Here are some random thoughts:
> > > 
> > >   - There's a trade-off between memory use (message buffering) and
> > >     throughput.  If you have many clients sharing the message bus,
> > >     smaller values of linkCapacity will protect the router memory.  If
> > >     you have relatively few clients wanting to go fast, a larger
> > >     linkCapacity is appropriate.
> > >   - If the underlying network has high latency (satellite links, long
> > >     distances, etc.), larger values of linkCapacity will be needed to
> > >     protect against stalling caused by delayed settlement.
> > >   - The default of 250 is considered a reasonable compromise.  I think a
> > >     value around 10 is better for a shared bus, but 500-1000 might be
> > >     better for throughput with few clients.
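> > > 
> > > As a config sketch of the few-fast-clients case (value illustrative),
> > > the capacity is set per listener in qdrouterd.conf:
> > > 
> > >     listener {
> > >         host: 0.0.0.0
> > >         port: 10454
> > >         role: normal
> > >         linkCapacity: 500
> > >     }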
> > > 
> > > -Ted
> > > 
> > > 
> > > On 07/26/2016 10:08 AM, Adel Boutros wrote:
> > > > Thanks Ted,
> > > >
> > > > I will try to change linkCapacity. However, I was wondering if there
> > > > is a way to "calculate an optimal value for linkCapacity". What
> > > > factors can impact this field?
> > > >
> > > > Regards,
> > > > Adel
> > > >
> > > >> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
> > > >> To: users@qpid.apache.org
> > > >> From: tross@redhat.com
> > > >> Date: Tue, 26 Jul 2016 09:44:43 -0400
> > > >>
> > > >> Adel,
> > > >>
> > > >> The number of workers should be related to the number of available
> > > >> processor cores, not the volume of work or number of connections.
> > > >> 4 is probably a good number for testing.
> > > >>
> > > >> I'm not sure what the default link credit is for the Java broker (it's
> > > >> 500 for the c++ broker) or the clients you're using.
> > > >>
> > > >> The metric you should adjust is the linkCapacity for the listener and
> > > >> route-container connector.  LinkCapacity is the number of deliveries
> > > >> that can be in-flight (unsettled) on each link.  Qpid Dispatch Router
> > > >> defaults linkCapacity to 250.  Depending on the volumes in your test,
> > > >> this might account for the discrepancy.  You should try increasing
> > > >> this value.
> > > >>
> > > >> Note that linkCapacity is used to set initial credit for your links.
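> > > >> 
> > > >> For the route-container side, the connector stanza takes the same
> > > >> attribute (a sketch; host/port as in your qdmanage commands, value
> > > >> illustrative):
> > > >> 
> > > >>     connector {
> > > >>         host: localhost
> > > >>         port: 10455
> > > >>         role: route-container
> > > >>         linkCapacity: 500
> > > >>     }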
> > > >>
> > > >> -Ted
> > > >>
> > > >> On 07/25/2016 12:10 PM, Adel Boutros wrote:
> > > >>> Hello,
> > > >>> We are currently running some performance benchmarks on an
> > > >>> architecture consisting of a Java Broker connected to a Qpid
> > > >>> Dispatch Router. We also have 3 producers and 3 consumers in the
> > > >>> test. The producers send messages to a topic which has a binding to
> > > >>> a queue with a filter, and the consumers receive messages from that
> > > >>> queue.
> > > >>> We have noticed a significant loss of performance in this
> > > >>> architecture compared to one composed of a simple Java Broker:
> > > >>> producer throughput drops to half, and there are a lot of
> > > >>> oscillations in the presence of the dispatcher.
> > > >>>
> > > >>> I have tried to double the number of workers on the dispatcher but
> > > >>> it had no impact.
> > > >>>
> > > >>> Can you please help us find the cause of this issue?
> > > >>>
> > > >>> Dispatch router config
> > > >>> router {
> > > >>>     id: router.10454
> > > >>>     mode: interior
> > > >>>     worker-threads: 4
> > > >>> }
> > > >>>
> > > >>> listener {
> > > >>>     host: 0.0.0.0
> > > >>>     port: 10454
> > > >>>     role: normal
> > > >>>     saslMechanisms: ANONYMOUS
> > > >>>     requireSsl: no
> > > >>>     authenticatePeer: no
> > > >>> }
> > > >>>
> > > >>> Java Broker config
> > > >>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
> > > >>> 1 Topic + 1 Queue
> > > >>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
> > > >>>
> > > >>> Qdmanage on Dispatcher
> > > >>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
> > > >>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
> > > >>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
> > > >>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
> > > >>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
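> > > >>> 
> > > >>> (To confirm the waypoint wiring came up, a query such as the
> > > >>> following can be run -- a sketch, output omitted:)
> > > >>> 
> > > >>> qdmanage -b amqp://localhost:10454 query --type=autoLink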
> > > >>>
> > > >>> Combined producer throughput
> > > >>> 1 Broker: http://hpics.li/a9d6efa
> > > >>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
> > > >>>
> > > >>> Regards,
> > > >>> Adel
> > > >>>
> > > >>>
> > > >>
> > > >> ---------------------------------------------------------------------
> > > >> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > > >> For additional commands, e-mail: users-help@qpid.apache.org
> > > >>
> > > >
> > > 
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > > For additional commands, e-mail: users-help@qpid.apache.org
> > > 