qpid-users mailing list archives

From: Jeff Armstrong <jarmstr...@avvasi.com>
Subject: RE: qpidd using approx 10x memory
Date: Fri, 09 Sep 2011 15:40:57 GMT
The sample code I provided only publishes messages - there is no consuming going on - so it
doesn't follow the pattern you are suggesting. If nothing is being freed because there is no
consumer, it seems there must be some other cause.
Also, in my application there are two publishing connections and two or more consuming
connections, and it shows the same problem.

________________________________________
From: Kim van der Riet [kim.vdriet@redhat.com]
Sent: Friday, September 09, 2011 7:15 AM
To: users@qpid.apache.org
Subject: RE: qpidd using approx 10x memory

On Thu, 2011-09-08 at 22:14 -0400, Ohme, Gregory (GE Healthcare) wrote:
> The pooling is a feature of glibc's malloc, and the pools are referred to
> as arenas. Arenas exist to ease thread contention in threaded programs.
> The downside is that memory fragmentation can increase with the number
> of arenas that get created. In a heavily threaded program the number of
> arenas will typically equal the number of hardware cores available in
> the system. You can disable the arenas in newer versions of glibc via
> malloc options, but whether that works depends on the distro supporting
> the behavior, since it is an experimental flag for malloc. Also note that
> malloc does not want to give memory back to the system right away; you
> can control this behavior via malloc options as well.
>
> http://www.gnu.org/s/libc/manual/html_node/Malloc-Tunable-Parameters.html#Malloc-Tunable-Parameters
>
> Here are some glibc bug reports on the matter
>
> http://sourceware.org/bugzilla/show_bug.cgi?id=11044
> http://sourceware.org/bugzilla/show_bug.cgi?id=11261
>
> Another downside to note about arenas is that they only allocate using
> mmap and never the __morecore hook. This breaks libraries that
> implement the __morecore hook, such as libhugetlbfs.
>
> Regards,
> Greg
>
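As an aside, here is a minimal sketch (glibc-specific; the thresholds are purely
illustrative, not recommendations) of the "give memory back sooner" tunables
Greg mentions above, just to show which mallopt() calls are involved:

/* Sketch only (glibc-specific): knobs that make free() hand memory back
 * to the kernel more eagerly.  Thresholds are illustrative, not tuned. */
#include <malloc.h>
#include <stdlib.h>

int main(void)
{
    /* Free space collecting at the top of the heap beyond this size is
     * returned to the OS automatically by free(). */
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);

    /* Requests at or above this size are served by mmap() and therefore
     * unmapped (given back to the OS) as soon as they are freed. */
    mallopt(M_MMAP_THRESHOLD, 128 * 1024);

    /* ... application / broker work would go here ... */
    void *p = malloc(1024 * 1024);
    free(p);

    /* Explicitly release whatever free heap memory can be trimmed now. */
    malloc_trim(0);
    return 0;
}
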
I can confirm that this problem exists for patterns where one connection
on the broker is responsible for publishing and another for consuming.
If I understand correctly, the broker assigns worker threads to
connections to limit the locking on sockets, but as a result one thread
is constantly mallocing memory while another is freeing it. The freed
memory then pools on a thread which does not reuse it.
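
For anyone who wants to poke at this outside the broker, here is a minimal
standalone sketch (not Qpid code; batch sizes and block sizes are arbitrary)
of that allocate-on-one-thread / free-on-another pattern. Running it with
different arena settings and looking at the malloc_stats() output (or the
process RSS) shows how the arenas behave:

/* Standalone sketch (not Qpid code): the main thread "publishes" by
 * allocating a batch of blocks, a second thread "consumes" by freeing
 * them, round after round.  malloc_stats() prints per-arena usage to
 * stderr so the effect of the arena tunables can be observed. */
#include <pthread.h>
#include <stdlib.h>
#include <malloc.h>

enum { BATCH = 10000, BLOCK = 4096, ROUNDS = 100 };

static void *blocks[BATCH];

static void *free_batch(void *arg)
{
    (void)arg;
    for (int i = 0; i < BATCH; i++)
        free(blocks[i]);                  /* freed on a different thread */
    return NULL;
}

int main(void)
{
    for (int r = 0; r < ROUNDS; r++) {
        for (int i = 0; i < BATCH; i++)
            blocks[i] = malloc(BLOCK);    /* allocated on the main thread */

        pthread_t t;
        pthread_create(&t, NULL, free_batch, NULL);
        pthread_join(t, NULL);
    }

    malloc_stats();                       /* per-arena totals on stderr */
    return 0;
}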

In Fedora/RHEL, this behavior can be controlled by the environment
variables MALLOC_ARENA_TEST and MALLOC_ARENA_MAX. Setting
MALLOC_ARENA_TEST to 0 and MALLOC_ARENA_MAX to 1 should revert the
malloc/free behavior to something similar to what it was before
per-thread arenas were introduced. Searching for these variables turns
up almost nothing outside RHEL/Fedora and their derivatives, so I am
uncertain to what extent they are supported on other distros, but it is
worth a try. Otherwise, as Ulrich Drepper suggests in the bugzilla links
above, use mallopt().
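
For completeness, the mallopt() route looks roughly like the sketch below. It
assumes a glibc new enough to understand these parameters; the constants are
guarded because older <malloc.h> headers may not define them (the -7/-8 values
are the ones used by glibc):

/* Sketch: cap glibc malloc at a single arena from inside the process,
 * the in-code equivalent of MALLOC_ARENA_TEST=0 MALLOC_ARENA_MAX=1. */
#include <malloc.h>

#ifndef M_ARENA_TEST
#define M_ARENA_TEST -7
#endif
#ifndef M_ARENA_MAX
#define M_ARENA_MAX  -8
#endif

int main(void)
{
    mallopt(M_ARENA_TEST, 0);   /* apply the arena limit check immediately */
    mallopt(M_ARENA_MAX, 1);    /* allow at most one arena */

    /* ... broker / application work would start here ... */
    return 0;
}

The environment-variable route needs no code change at all: just export the
variables before starting the broker (e.g. MALLOC_ARENA_TEST=0
MALLOC_ARENA_MAX=1 qpidd), on distros whose glibc honours them.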

I have confirmed that these settings permanently averted the problem for
a broker usage pattern of repeatedly publishing large blocks of messages
on one connection and then consuming them on another (a pattern which
had been driving free memory down to zero and into swapping).

Clearly, using these parameters should be considered tuning - there is
no single setting which fits all situations. If you have a usage pattern
in which connections consume as much as they publish, then this
situation would not occur, and you would probably realize a small
performance gain from the per-thread free-memory pooling.


---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:users-subscribe@qpid.apache.org

