qpid-users mailing list archives

From CLIVE <cl...@ckjltd.co.uk>
Subject Re: QPID C++ Broker MALLOC_ARENA_MAX
Date Tue, 28 Mar 2017 18:24:19 GMT
Ted,

Still seeing large memory usage on the broker, currently around 39GBytes.

I think we are seeing high levels of memory fragmentation, initially 
made worse by the glibc malloc arena
implementation, which has been disabled with the settings I made on Friday.

The current message sizes range from around 1K to 5M, and messages are 
being continuously published and consumed.

I'll probably try using some of the malloc tuning parameters to turn off 
the normal dynamic mmap threshold adjustment by using MALLOC_MMAP_THRESHOLD 
to fix the threshold at something like 64K-128K. This should cause malloc 
to use mmap to map memory into the application, which will then definitely 
be released back to the OS after use.
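
For reference, below is a rough sketch (not broker code, just an 
illustration assuming glibc >= 2.10) of setting the same two knobs 
programmatically with mallopt(3) from C++; the MALLOC_ARENA_MAX and 
MALLOC_MMAP_THRESHOLD_ (trailing underscore) environment variables set 
the same values without a rebuild:

    // Sketch only: set the malloc tunables early in main(), before
    // threads start allocating. Values are illustrative, matching the
    // ones discussed above.
    #include <malloc.h>   // mallopt, M_ARENA_MAX, M_MMAP_THRESHOLD
    #include <cstdio>

    int main() {
        // Cap the number of malloc arenas (glibc >= 2.10); the
        // environment-variable equivalent is MALLOC_ARENA_MAX=4.
        if (mallopt(M_ARENA_MAX, 4) != 1)
            std::fprintf(stderr, "mallopt(M_ARENA_MAX) failed\n");

        // Pin the mmap threshold at 128K so larger allocations are
        // served by mmap and handed back to the kernel on free; setting
        // it also disables glibc's dynamic threshold adjustment. The
        // env-var equivalent is MALLOC_MMAP_THRESHOLD_=131072.
        if (mallopt(M_MMAP_THRESHOLD, 128 * 1024) != 1)
            std::fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");

        // ... application work would follow here ...
        return 0;
    }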

If that doesn't work I will probably have to start looking at jemalloc 
or Google's tcmalloc.
Does anyone have experience of using different memory allocators with QPID?

Clive

On 27/03/2017 22:12, CLIVE wrote:
> Ted,
>
> Thanks for the response.
>
> I've taken on board the solution proposed in the post you directed me to.
>
> I implemented the changes on Friday. Looking at it this morning, the 
> broker is still consuming a large amount of memory, 30GBytes, but I need 
> to at least give it until the end of the week before drawing any 
> conclusions. Interestingly, I am no longer seeing any 64MByte blocks 
> and the CPU usage has gone up by 10%, so it would appear the changes 
> have taken effect. Among all the allocations reported by pmap there is 
> now just a single massive allocation of 30G (which I assume is the heap).
>
> Hopefully the memory consumption will level off by the end of the week.
>
> Clive
>
> On 22/03/2017 21:12, Ted Ross wrote:
>> This reply was apparently lost in the email outage. Resending...
>>
>>
>> -------- Forwarded Message --------
>> Subject: Re: QPID C++ Broker MALLOC_ARENA_MAX
>> Date: Tue, 21 Mar 2017 08:28:35 -0400
>> From: Ted Ross <tross@redhat.com>
>> To: users@qpid.apache.org
>>
>> Hi Clive,
>>
>> We've seen this before and, as I recall, it was specific to RHEL 6 
>> (CentOS 6).
>>
>> Here's a post from Kim van der Riet from 2011 that summarizes the 
>> issue and a solution:
>>
>> http://qpid.2158936.n2.nabble.com/qpidd-using-approx-10x-memory-tp6730073p6775634.html
>>
>> -Ted
>>
>> On 03/20/2017 05:55 PM, CLIVE wrote:
>>> Hi,
>>>
>>> Been a while since I last posted anything to the QPID newsgroup, mainly
>>> due to the excellent reliability of the QPID C++ broker; keep up the
>>> good work.
>>>
>>> But I am seeing a strange issue at a client's site that I thought I 
>>> would share with the community.
>>>
>>> A client is running a QPID C++ Broker (version 0.32) on a CentOS 6.7
>>> virtualized platform (8 CPUs, 32 cores, and 64G RAM) and is experiencing
>>> memory exhaustion problems. Over the course of 5-30 days the broker's
>>> resident memory steadily climbs until it exhausts the available memory
>>> and the broker is killed by the kernel's OOM killer. The memory pattern
>>> looks like a memory leak, but I've never seen this kind of behavior
>>> before from a QPID C++ broker, and looking on JIRA there don't seem to
>>> be any known memory leak issues.
>>>
>>> The broker is running 10 threads, currently supporting 134 long-lived
>>> connections from a range of Java JMS (Apache Camel), C++ and Python
>>> clients, with 25 user-defined exchanges and about 100 durable ring
>>> queues. All messages are transient. About 20GBytes of data is pushed
>>> through the broker each day, with message sizes ranging from small
>>> messages of 1K to messages of around 100K.
>>>
>>> As the broker memory consumption climbs, 'qpid-stat -g' shows a steady
>>> state queue depth of about 125,000 messages, totaling 660M-1GBytes of
>>> memory. So it's not a queue depth issue.
>>>
>>> Interestingly, when I run pmap -x <qpid pid> I see lots and lots of
>>> 64MByte allocations (about 400), with 300 additional allocations of
>>> just under 64MBytes.
>>>
>>> Some searching on the web has turned up a potential candidate for the
>>> memory consumption issue, associated with the design change made to the
>>> glibc malloc implementation in glibc 2.10+, which introduced memory
>>> arenas to reduce memory contention in multi-threaded processes. The
>>> malloc implementation uses some basic math to work out how much total
>>> arena memory can be allocated to a process: number of cores *
>>> sizeof(long) * 64MB. So for our 64-bit system that would give
>>> 32 * 8 * 64MB = 16G.
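
(For reference, a back-of-the-envelope sketch of that ceiling, assuming 
the 32-core, 64-bit figures above:)

    // Sketch: reproduces the 32 * 8 * 64MB = 16G arithmetic from the post.
    #include <cstdio>

    int main() {
        const unsigned long cores     = 32;                    // vCPUs on the broker host
        const unsigned long arenaSize = 64UL * 1024 * 1024;    // 64MB per arena
        const unsigned long maxArenas = cores * sizeof(long);  // 32 * 8 on a 64-bit build
        std::printf("theoretical arena memory: %lu GB\n",
                    maxArenas * arenaSize / (1024UL * 1024 * 1024));  // prints 16
        return 0;
    }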
>>>
>>> Apparently other products have had similar memory issues when they
>>> moved to RHEL 6 (CentOS 6) from RHEL 5, as the newer OS uses glibc 2.12.
>>> The MALLOC_ARENA_MAX environment variable seems to be a way of reducing
>>> the memory allocated to the process, with a suggested value of 4.
>>>
>>> Just wondered if anyone else in the community had experienced a similar
>>> kind of broker memory issue, and what advice, if any, could be supplied
>>> to localize the problem and stop the broker chewing through 64G of RAM.
>>>
>>> Any help/advice gratefully appreciated.
>>>
>>> Clive
>>>


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org

