qpid-users mailing list archives

From Robbie Gemmell <robbie.gemm...@gmail.com>
Subject Re: What is the memory footprint of an Apache qpid queue?
Date Mon, 17 Oct 2011 21:35:21 GMT
The individual queues have a very low memory overhead and don't have
any byte[] buffers that I can recall, so those are either for network
data or, more probably, the session command buffers. I did fix a few
issues after 0.12 where those could be retained unnecessarily; however,
whether that helps depends on what you are doing, and the fact that you
see such a significant difference when using the Derby vs Memory store
suggests it isn't just those at work.

Could you please post the code you are using for your testing so we
can try to replicate the scenario precisely?

Thanks,
Robbie
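As a rough sanity check on the sizing question, the totals reported later in this thread can be turned into per-queue figures; this is an editorial back-of-envelope sketch (straight division of the reported heap totals, not a measured per-queue cost):

```python
# Per-queue heap arithmetic from the figures reported in this thread.
# These are simple divisions of reported totals, not measured overheads.

GIB = 1024 ** 3

# MemoryMessageStore run: ~2.1 GB of heap for 50K queues (one message each).
observed_per_queue_kib = (2.1 * GIB) / 50_000 / 1024   # ~44 KiB/queue

# Target deployment: 4 GB heap shared across 20K persistent queues.
budget_per_queue_kib = (4 * GIB) / 20_000 / 1024       # ~210 KiB/queue

print(f"observed ~{observed_per_queue_kib:.0f} KiB/queue, "
      f"budget ~{budget_per_queue_kib:.0f} KiB/queue")
```

By this arithmetic the MemoryMessageStore run sat well inside the per-queue budget, which is consistent with the suspicion that the extra usage in the Derby run comes from the store rather than the queues themselves.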

On 14 October 2011 19:48, Praveen M <lefthandmagic@gmail.com> wrote:
> Actually, my bad.
>
> I went back and ran the test with MemoryMessageStore, and realized that
> there is a big jump in memory usage when the messages get enqueued and
> processed. I wasn't running out of memory in that case (using
> MemoryMessageStore) as I was with DerbyMessageStore, since the broker
> with Derby was taking up more memory (to keep the queue state, I
> suppose).
>
> That said,
>
> Can someone please tell me the memory footprint of an individual
> queue?
>
> What is the max number of queues that you've created on Qpid? And how
> much memory on the broker side would you say each queue takes?
>
> I'm benchmarking my tests against qpid client/broker 0.12.
>
> Also, can someone please let me know if there are any tweaks that will
> reduce the queue buffer size? I did a heap dump and saw that a lot of my
> memory was allocated to a byte[] buffer, which I assume is the queue's
> buffer. Does anyone know the default buffer size? Can I change it?
>
> I use -Damqj.read_write_pool_size=32 -Dmax_prefetch=1
>
> My use case requires the operation of about 20K *persistent* queues in
> parallel, and I'd like to see reasonable memory usage when all the queues
> have messages to consume. I'm willing to compromise on throughput if I
> can save more heap.
>
> Thanks a lot,
> Praveen
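For completeness, the settings above are typically passed to the Java broker through environment variables before launching it; a hypothetical launch sketch (only the two -D flags come from this thread — the QPID_JAVA_MEM variable, the 4 GB heap figure, and the script path are assumptions about a standard install):

```shell
# Hypothetical broker launch for the scenario above; variable names and the
# script path are assumptions, only the -D flags are taken from the thread.
export QPID_JAVA_MEM="-Xmx4g"                                    # 4 GB heap
export QPID_OPTS="-Damqj.read_write_pool_size=32 -Dmax_prefetch=1"
"$QPID_HOME/bin/qpid-server"
```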
>
>
>
> On Wed, Oct 12, 2011 at 9:22 AM, Robbie Gemmell <robbie.gemmell@gmail.com> wrote:
>
>> What version of the client/broker were you using in your test? Can you
>> send a copy of the code you used to reproduce the issue? (It will
>> probably get stripped by the mailing list if you attach it, so just
>> paste it in.)
>>
>> Regards,
>> Robbie
>>
>> On 12 October 2011 06:21, Praveen M <lefthandmagic@gmail.com> wrote:
>> > Thanks for your email. I have earlier run the benchmark with
>> > MemoryMessageStore.
>> >
>> > I was able to go up to 50K queues with the exact same test (a message per
>> > queue), and it took up merely 2.1 GB.
>> >
>> > So this seems to be something when I switch to DerbyMessageStore.
>> >
>> > Maybe I am missing some setting?
>> > Or is Derby supposed to perform this way?
>> >
>> > Thanks for your help.
>> >
>> > Thanks,
>> > Praveen
>> >
>> > On Tue, Oct 11, 2011 at 8:44 PM, Danushka Menikkumbura <
>> > danushka.menikkumbura@gmail.com> wrote:
>> >
>> >> Hi Praveen,
>> >>
>> >> Do you notice the same behavior even when you run the broker without
>> >> the Derby message store? AFAIK this has nothing to do with the
>> >> persistence storage you use.
>> >>
>> >> Thanks,
>> >> Danushka
>> >>
>> >> On Wed, Oct 12, 2011 at 4:14 AM, Praveen M <lefthandmagic@gmail.com>
>> >> wrote:
>> >>
>> >> > Hi,
>> >> >
>> >> > I'm an Apache Qpid newbie and am trying to benchmark the Qpid Java
>> >> > Broker to see if it could be used for one of my use cases.
>> >> >
>> >> > My use case requires the ability to create at least 20K persistent
>> >> > queues and have them all running in parallel.
>> >> >
>> >> > I am using the DerbyMessageStore, as I understand that the default
>> >> > MemoryMessageStore is not persistent across broker restarts.
>> >> >
>> >> > I'm running the broker with a heap of 4GB and the QPID_OPTS option
>> >> > set to -Damqj.read_write_pool_size=32 -Dmax_prefetch=1
>> >> >
>> >> >
>> >> > My test does the following:
>> >> >
>> >> > 1) Creates a queue and registers a listener on that queue. I do this
>> >> > up to 20K times for 20K distinct queues. I create the queues with the
>> >> > following option:
>> >> >    {create: always, node: {type: queue, durable: true}}
>> >> >    - This step goes quite fine. I was monitoring the memory usage
>> >> > during this step and it almost always stayed stable around 500-800MB.
>> >> > 2) I produce messages for the queues (one message for each queue) and
>> >> > the messages are consumed by the handlers registered in step 1.
>> >> >    - When this step starts, the memory usage just shoots up and
>> >> > exhausts my 4GB of memory altogether.
>> >> >
>> >> >
>> >> > Can someone please help me explain why I am seeing this kind of
>> >> > behavior?
>> >> >
>> >> > Also, can you please point out if I'm missing a setting or doing
>> >> > something completely wrong/stupid?
>> >> >
>> >> >
>> >> > Thanks,
>> >> > --
>> >> > -Praveen
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > -Praveen
>> >
>>
>> ---------------------------------------------------------------------
>> Apache Qpid - AMQP Messaging Implementation
>> Project:      http://qpid.apache.org
>> Use/Interact: mailto:users-subscribe@qpid.apache.org
>>
>>
>
>
> --
> -Praveen
>
