qpid-users mailing list archives

From CLIVE <cl...@ckjltd.co.uk>
Subject Re: QPID performance on virtual machines
Date Wed, 02 May 2012 21:28:16 GMT

I thought about this as well, so I restarted the broker on the physical 
Dell R710 with the threads option set to just 4 and saw the same 
throughput values (85,000 publish and 80,000 consume transfers/sec). As 
reducing the thread count didn't have much effect on the physical 
machine, I thought that this probably wasn't the issue.
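For reference, the broker's I/O thread pool can be pinned explicitly at startup. A minimal sketch, assuming qpidd 0.14's --worker-threads option and a broker that is safe to restart (the actual qpidd invocation is commented out, so this only prints the command it would run):

```shell
# Sketch: pin the broker's I/O thread pool explicitly rather than relying
# on the default of (#CPUs + 1).
CPUS=$(nproc)
THREADS=$((CPUS + 1))
echo "qpidd --worker-threads ${THREADS}"
# qpidd --worker-threads "${THREADS}" --auth no
```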

As the qpid-perftest application was only creating one producer and one 
consumer, I reasoned that the broker was perhaps only using two threads 
to service the reads and writes from these clients, which would explain 
why reducing the broker's thread count had no effect. Would you expect 
the broker to use more than two threads to service the clients in this 
scenario?
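One way to probe this would be to drive the broker with more concurrent connections. A sketch using qpid-perftest's --npubs/--nsubs options (flag names as I recall them from qpid 0.14; the run itself is commented out since it needs a live broker):

```shell
# Sketch: scale up concurrent publishers/subscribers so the broker has
# more connections to spread across its I/O threads. Printed rather than
# executed, since it needs a running broker.
PUBS=4
SUBS=4
echo "qpid-perftest --npubs ${PUBS} --nsubs ${SUBS} --count 100000"
# qpid-perftest --npubs "${PUBS}" --nsubs "${SUBS}" --count 100000
```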

I will rerun the test tomorrow with an increased number of CPUs in the 
VM(s), just to double-check whether this is a core-count issue.
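Before rerunning, it's worth confirming how many cores the guest OS actually sees:

```shell
# Report the CPU count visible to the guest. The two counts can differ
# if CPU affinity restricts the current process.
nproc
grep -c '^processor' /proc/cpuinfo
```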

I did run 'strace -c' on qpidd while the test was running to count 
system calls, and I noted that the big hitters were futex and write. 
Interestingly, the reads came in 64K chunks, but the writes were only 
2048 bytes at a time; as a result, the number of writes was an order of 
magnitude larger than the number of reads. I left the detailed results 
at work, so apologies for not quoting the actual figures.
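For anyone wanting to reproduce the syscall summary, a sketch (assumes strace is installed and qpidd is running; the attach itself is commented out since it needs a live broker and suitable privileges):

```shell
# Sketch: count system calls made by the broker during a test run.
# -c summarises counts and time per syscall; -f follows the broker's
# I/O threads. Falls back to a placeholder if qpidd isn't running.
PID=$(pidof qpidd 2>/dev/null || echo "<qpidd-pid>")
echo "strace -c -f -p ${PID}"
# timeout 30 strace -c -f -p "${PID}"
```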


On 02/05/2012 20:23, Steve Huston wrote:
> The qpid broker learns how many CPUs are available and will run more I/O
> threads when more CPUs are available (#CPUs + 1 threads). It would be
> interesting to see the results if your VM gets more CPUs.
> -Steve
>> -----Original Message-----
>> From: CLIVE [mailto:clive@ckjltd.co.uk]
>> Sent: Wednesday, May 02, 2012 1:30 PM
>> To: James Kirkland
>> Cc: users@qpid.apache.org
>> Subject: Re: QPID performance on virtual machines
>> James,
>> qpid-perftest (as supplied with the qpid-0.14 source tarball) runs a
>> direct queue test when executed without any parameters; there is a
>> command line option that enables this to be changed if required. The
>> message size is 1024 bytes (again the default when not explicitly set),
>> and 500,000 messages are published by the test (again the default when
>> not explicitly set). All messages are transient, so I wouldn't expect
>> any file I/O overhead to interfere with the test, and this is
>> confirmed by the vmstat results I am seeing. The only jump in the
>> vmstat output is the number of context switches, which climbs into
>> the thousands.
>> Clive
>> On 02/05/2012 18:10, James Kirkland wrote:
>>> What sort of messaging scenario is it?  Are the messages persisted?
>>> How big are they?  If they are persisted are you using virtual disks
>>> or physical devices?
>>> CLIVE wrote:
>>>> Hi all,
>>>> I have been undertaking some performance profiling of QPID version
>>>> 0.14 over the last few weeks and I have found a significant
>>>> performance drop off when running QPID in a virtual machine.
>>>> As an example if I run qpidd on an 8 core DELL R710 with 36G RAM
>>>> (RHEL5u5) and then run qpid-perftest (on the same machine to
>>>> discount any network problems) without any command line parameters I
>>>> am seeing about 85,000 publish transfers/sec and 80,000 consume
>>>> transfers/sec. If I run the same scenario on a VM (tried both KVM and
>>>> VMWare ESXi 4.3 running RHEL5u5) with 2 cores and 8G RAM, I am seeing
>>>> only 45,000 publish transfers/sec and 40,000 consume transfers/sec: a
>>>> significant drop-off in performance. Looking at the CPU and memory
>>>> usage these would not seem to be the limiting factors as the memory
>>>> consumption of qpidd stays under 200 MBytes and its CPU is up at
>>>> about 150%; hence the two core machine.
>>>> I have even run the same test on my Mac Book at home using VMWare
>>>> Fusion 4 (2 cores, 4G RAM) and see the same 45,000/40,000 transfers/sec
>>>> results.
>>>> I would expect a small drop off in performance when running in a VM,
>>>> but not to the extent that I am seeing.
>>>> Has anyone else seen this, and if so were they able to get to the
>>>> bottom of the issue?
>>>> Any help would be appreciated.
>>>> Clive Lilley
>>> --
>>> James Kirkland
>>> Principal Enterprise Solutions Architect
>>> 3340 Peachtree Road, NE,
>>> Suite 1200
>>> Atlanta, GA 30326 USA.
>>> Phone (404) 254-6457
>>> RHCE Certificate: 805009616436562

To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org
