qpid-users mailing list archives

From Ted Ross <tr...@redhat.com>
Subject Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
Date Tue, 02 Aug 2016 19:18:40 GMT
Since this is synchronous and durable, I would expect the store to be 
the bottleneck in these cases, and that for rates of ~7.5K the router 
shouldn't be a factor.  The only reason I can see for the router to 
affect throughput would be by introducing latency.  Of course, it's 
possible that there's a defect we need to fix.

-Ted

On 08/02/2016 03:12 PM, Adel Boutros wrote:
> I forgot to add that we use durable queues and that the persistence is set to DEFAULT.
>
>> From: adelboutros@live.com
>> To: users@qpid.apache.org
>> Subject: RE: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>> Date: Tue, 2 Aug 2016 21:10:35 +0200
>>
>> We are using Qpid Java Broker 6.0.1 with Berkeley DB as the message store. Were you using
>> asynchronous sending when you got 80K? Because I think with asynchronous sending we could
>> reach higher speeds. We actually timestamp right before and after the call to the "send"
>> method. If we used asynchronous sending, the timestamps would be wrong, as they wouldn't
>> account for settlement.
>>
>> I will try the multiple connectors tomorrow and let you know how it goes. Do you want me to
>> test asynchronous sending as well?
>>
>> Regards,
>> Adel
>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>> To: users@qpid.apache.org
>>> From: tross@redhat.com
>>> Date: Tue, 2 Aug 2016 14:44:22 -0400
>>>
>>>
>>>
>>> On 08/02/2016 02:10 PM, Adel Boutros wrote:
>>>> Hello Ted, Gordon,
>>>>
>>>> When I say the JMS producers are sending synchronously, I mean they don't set any options
>>>> on the connection URL such as jms.forceAsyncSend. So I guess this means the producer will
>>>> wait for the settlement before sending message X + 1.
>>>>
>>>> When I say it fails, I mean that with 1 producer I get 2500 msg/s. When I add a second
>>>> producer, I am at 4800 msg/s (which is roughly twice the throughput of a single producer).
>>>> But when I add a 3rd producer, I am at 5100 msg/s, while I expect it to be around
>>>> 7500 msg/s. So for me the scaling stops working when adding a 3rd producer and above.
>>>
>>> Understood.
>>>
>>>>
>>>> What you both explained to me about the single connection is indeed a plausible candidate,
>>>> because in the "broker only" tests the throughput of a single connection is around
>>>> 3 500 msg/s. So on a single connection I shouldn't go above that figure, which is what I am
>>>> seeing. I imagine that when I add more producers/consumers, the throughput will shrink even
>>>> more because the same connection is used by all the producers and the consumers.
>>>>
>>>> Do you think it might be a good idea if the connections were per worker thread and not
>>>> just a single connection?
>>>
>>> I think this is an interesting feature to consider; however, 5.1K
>>> messages per second on a connection seems like a really low limit to me.
>>> As I recall, we were able to get closer to 80K to 100K per connection
>>> on qpidd.  Which broker are you using?
>>>
>>> An interesting experiment would be to configure two connectors to the
>>> same broker (with different names) and configure autoLinks with
>>> different addresses to the two connectors.  This would show whether the
>>> bottleneck is the router-to-broker connection.
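>>>
>>> A rough sketch of that experiment with qdmanage (untested; the second address
>>> "perf.topic2" and the connector/autoLink names are just illustrative):
>>>
>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=broker.connector.a
>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=broker.connector.b
>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=broker.connector.a name=autolink.topic.a
>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic2 dir=out connection=broker.connector.b name=autolink.topic2.b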
>>>
>>>>
>>>> Another solution would be to use a maximum of 3 clients (producer or consumer) per
>>>> dispatcher and have a network of interconnected dispatchers, but I find that very heavy
>>>> and hard to maintain and support on the client side. Do you agree?
>>>
>>> I don't think this would solve your problem anyway.
>>>
>>>>
>>>> JMS Producer code
>>>> ConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://machine:port");
>>>> Connection connection = connectionFactory.createConnection();
>>>> Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
>>>> Topic topic = session.createTopic("perf.topic");
>>>> MessageProducer messageProducer = session.createProducer(topic);
>>>> BytesMessage message = session.createBytesMessage();
>>>> message.writeBytes(new byte[100]); // 100-byte payload, as in the tests
>>>> messageProducer.send(message);     // blocks until settled (synchronous send)
>>>>
>>>> Regards,
>>>> Adel
>>>>
>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>> To: users@qpid.apache.org
>>>>> From: tross@redhat.com
>>>>> Date: Tue, 2 Aug 2016 13:42:24 -0400
>>>>>
>>>>>
>>>>>
>>>>> On 07/29/2016 08:40 AM, Adel Boutros wrote:
>>>>>> Hello Ted,
>>>>>>
>>>>>> Increasing the link capacity had no impact. So I have done a series of tests to try
>>>>>> to isolate the issue.
>>>>>> We tested 3 different architectures without any consumers:
>>>>>> Producer --> Broker
>>>>>> Producer --> Dispatcher
>>>>>> Producer --> Dispatcher --> Broker
>>>>>> In every test, we sent 100 000 messages, each containing a byte array of 100 bytes.
>>>>>> The producers are sending in synchronous mode with AUTO_ACKNOWLEDGE.
>>>>>>
>>>>>> Our benchmark machines have 20 cores and 396 GB of RAM each. We have currently put the
>>>>>> consumers/producers on one machine and the dispatcher/brokers on another. The machines
>>>>>> are connected by a 10 Gbps Ethernet link, and nothing else is using them.
>>>>>>
>>>>>> The results are in the table below.
>>>>>>
>>>>>> What I could observe:
>>>>>> The broker alone scales well when I add producers
>>>>>> The dispatcher alone scales well when I add producers
>>>>>> The dispatcher connected to a broker scales well with 2 producers
>>>>>> The dispatcher connected to a broker fails when having 3 producers or more
>>>>>
>>>>> In what way does it fail?
>>>>>
>>>>>>
>>>>>> I also did some "qdstat -l" while the test was running and at max had 5 unsettled
>>>>>> deliveries. So I don't think the problem comes from the linkCapacity.
>>>>>
>>>>> You mentioned that you are running in synchronous mode.  Does this mean
>>>>> that each producer is waiting for settlement on message X before sending
>>>>> message X+1?
>>>>>
>>>>>>
>>>>>> What else can we look at? How does the dispatcher connect the producers to the broker?
>>>>>> Does it open a new connection for each new producer? Or does it use some sort of
>>>>>> connection pool?
>>>>>
>>>>> The router multiplexes the broker traffic over a single connection to
>>>>> the broker.
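>>>>>
>>>>> (You can see this with "qdstat -c", which lists the router's open connections; there
>>>>> should be a single connection to the broker regardless of how many producers attach.)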
>>>>>
>>>>>>
>>>>>> Could the issue come from the capacity configuration of the link in the connection
>>>>>> between the broker and the dispatcher?
>>>>>
>>>>> Probably not in your case since the backlogs are much smaller than the
>>>>> default capacity.
>>>>>
>>>>>>
>>>>>>   Number of Producers  Broker  Dispatcher  Combined Producer Throughput (msg/s)  Combined Producer Latency (micros)
>>>>>>   ===================================================================================================================
>>>>>>   1                    YES     NO          3 500                                  370
>>>>>>   4                    YES     NO          9 200                                  420
>>>>>>   1                    NO      YES         6 000                                  180
>>>>>>   2                    NO      YES         12 000                                 192
>>>>>>   3                    NO      YES         16 000                                 201
>>>>>>   1                    YES     YES         2 500                                  360
>>>>>>   2                    YES     YES         4 800                                  400
>>>>>>   3                    YES     YES         5 200                                  540
>>>>>>
>>>>>> qdstat -l
>>>>>> bash$ qdstat -b dell445srv:10254 -l
>>>>>> Router Links
>>>>>>   type      dir  conn id  id  peer  class   addr                  phs  cap  undel  unsettled  deliveries  admin    oper
>>>>>>   =======================================================================================================================
>>>>>>   endpoint  in   19       46        mobile  perfQueue             1    250  0      0          0           enabled  up
>>>>>>   endpoint  out  19       54        mobile  perf.topic            0    250  0      2          4994922     enabled  up
>>>>>>   endpoint  in   27       57        mobile  perf.topic            0    250  0      1          1678835     enabled  up
>>>>>>   endpoint  in   28       58        mobile  perf.topic            0    250  0      1          1677653     enabled  up
>>>>>>   endpoint  in   29       59        mobile  perf.topic            0    250  0      0          1638434     enabled  up
>>>>>>   endpoint  in   47       94        mobile  $management           0    250  0      0          1           enabled  up
>>>>>>   endpoint  out  47       95        local   temp.2u+DSi+26jT3hvZ       250  0      0          0           enabled  up
>>>>>>
>>>>>> Regards,
>>>>>> Adel
>>>>>>
>>>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>>>> To: users@qpid.apache.org
>>>>>>> From: tross@redhat.com
>>>>>>> Date: Tue, 26 Jul 2016 10:32:29 -0400
>>>>>>>
>>>>>>> Adel,
>>>>>>>
>>>>>>> That's a good question.  I think it's highly dependent on your
>>>>>>> requirements and the environment.  Here are some random thoughts:
>>>>>>>
>>>>>>>   - There's a trade-off between memory use (message buffering) and
>>>>>>>     throughput.  If you have many clients sharing the message bus,
>>>>>>>     smaller values of linkCapacity will protect the router memory.  If
>>>>>>>     you have relatively few clients wanting to go fast, a larger
>>>>>>>     linkCapacity is appropriate.
>>>>>>>   - If the underlying network has high latency (satellite links, long
>>>>>>>     distances, etc.), larger values of linkCapacity will be needed to
>>>>>>>     protect against stalling caused by delayed settlement.
>>>>>>>   - The default of 250 is considered a reasonable compromise.  I think a
>>>>>>>     value around 10 is better for a shared bus, but 500-1000 might be
>>>>>>>     better for throughput with few clients.
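>>>>>>>
>>>>>>> As a sketch, raising it on the listener (and similarly on the route-container
>>>>>>> connector) would look like this in the router config:
>>>>>>>
>>>>>>> listener {
>>>>>>>     host: 0.0.0.0
>>>>>>>     port: 10454
>>>>>>>     role: normal
>>>>>>>     linkCapacity: 500
>>>>>>> }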
>>>>>>>
>>>>>>> -Ted
>>>>>>>
>>>>>>>
>>>>>>> On 07/26/2016 10:08 AM, Adel Boutros wrote:
>>>>>>>> Thanks Ted,
>>>>>>>>
>>>>>>>> I will try to change linkCapacity. However, I was wondering if there is a way to
>>>>>>>> "calculate an optimal value for linkCapacity". What factors can impact this field?
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Adel
>>>>>>>>
>>>>>>>>> Subject: Re: [Performance] Benchmarking Qpid dispatch router 0.6.0 with Qpid Java Broker 6.0.0
>>>>>>>>> To: users@qpid.apache.org
>>>>>>>>> From: tross@redhat.com
>>>>>>>>> Date: Tue, 26 Jul 2016 09:44:43 -0400
>>>>>>>>>
>>>>>>>>> Adel,
>>>>>>>>>
>>>>>>>>> The number of workers should be related to the number of available
>>>>>>>>> processor cores, not the volume of work or number of connections.  4 is
>>>>>>>>> probably a good number for testing.
>>>>>>>>>
>>>>>>>>> I'm not sure what the default link credit is for the Java broker (it's
>>>>>>>>> 500 for the C++ broker) or for the clients you're using.
>>>>>>>>>
>>>>>>>>> The metric you should adjust is the linkCapacity for the listener and
>>>>>>>>> the route-container connector.  linkCapacity is the number of deliveries
>>>>>>>>> that can be in-flight (unsettled) on each link.  Qpid Dispatch Router
>>>>>>>>> defaults linkCapacity to 250.  Depending on the volumes in your test,
>>>>>>>>> this might account for the discrepancy.  You should try increasing this
>>>>>>>>> value.
>>>>>>>>>
>>>>>>>>> Note that linkCapacity is used to set the initial credit for your links.
>>>>>>>>>
>>>>>>>>> -Ted
>>>>>>>>>
>>>>>>>>> On 07/25/2016 12:10 PM, Adel Boutros wrote:
>>>>>>>>>> Hello,
>>>>>>>>>>
>>>>>>>>>> We are currently running some performance benchmarks on an architecture consisting
>>>>>>>>>> of a Java Broker connected to a Qpid Dispatch Router, with 3 producers and 3 consumers
>>>>>>>>>> in the test. The producers send messages to a topic which has a binding on a queue
>>>>>>>>>> with a filter, and the consumers receive messages from that queue.
>>>>>>>>>>
>>>>>>>>>> We have noticed a significant loss of performance in this architecture compared to
>>>>>>>>>> one composed of a simple Java Broker: the throughput of the producers is down to half,
>>>>>>>>>> and there are a lot of oscillations in the presence of the dispatcher.
>>>>>>>>>>
>>>>>>>>>> I have tried to double the number of workers on the dispatcher, but it had no impact.
>>>>>>>>>>
>>>>>>>>>> Can you please help us find the cause of this issue?
>>>>>>>>>>
>>>>>>>>>> Dispatch router config
>>>>>>>>>> router {
>>>>>>>>>>     id: router.10454
>>>>>>>>>>     mode: interior
>>>>>>>>>>     worker-threads: 4
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> listener {
>>>>>>>>>>     host: 0.0.0.0
>>>>>>>>>>     port: 10454
>>>>>>>>>>     role: normal
>>>>>>>>>>     saslMechanisms: ANONYMOUS
>>>>>>>>>>     requireSsl: no
>>>>>>>>>>     authenticatePeer: no
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>> Java Broker config
>>>>>>>>>> export QPID_JAVA_MEM="-Xmx16g -Xms2g"
>>>>>>>>>> 1 Topic + 1 Queue
>>>>>>>>>> 1 AMQP port without any authentication mechanism (ANONYMOUS)
>>>>>>>>>>
>>>>>>>>>> Qdmanage on Dispatcher
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perfQueue waypoint=true name=perf.queue.addr
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=address prefix=perf.topic waypoint=true name=perf.topic.addr
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=connector role=route-container addr=localhost port=10455 name=localhost.broker.10455.connector
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perfQueue dir=in connection=localhost.broker.10455.connector name=localhost.broker.10455.perfQueue.in
>>>>>>>>>> qdmanage -b amqp://localhost:10454 create --type=autoLink addr=perf.topic dir=out connection=localhost.broker.10455.connector name=localhost.broker.10455.perf.topic.out
>>>>>>>>>>
>>>>>>>>>> Combined producer throughput
>>>>>>>>>> 1 Broker: http://hpics.li/a9d6efa
>>>>>>>>>> 1 Broker + 1 Dispatcher: http://hpics.li/189299b
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Adel
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org

