qpid-users mailing list archives

From Fraser Adams <fraser.ad...@blueyonder.co.uk>
Subject more memory "leak" woes.
Date Fri, 03 Aug 2012 10:42:44 GMT
Hello all,
I've previously posted about a problem whereby, in a federated set-up, 
we've been seeing brokers eat memory. In that case what was weird was 
that when the producer client was pointed at a broker on the same box 
we'd start to see problems, but when it was pointed at a broker on a 
remote box it largely seemed to be stable. It drove us nuts, but we 
were able to park it for a while. We've recently started seeing it 
again, in a scenario where we really had to use a co-located broker.

One thing that seems interesting is that it happens most when the 
consumer is in some way behaving slower than it should.

We're publishing messages to amq.match and the queue the consumers 
receive from is a RING queue, so as far as I can see (intuitively at 
least!!) it shouldn't really matter if the consumer is slow - the 
circular queue should just overwrite the oldest messages.
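For reference, the ring queue itself is created along these lines (the 
name and size here are placeholders rather than our exact settings):

qpid-config add queue test-ring-queue --max-queue-size=104857600 \
    --limit-policy=ring

i.e. a 100MB queue that should silently discard the oldest messages 
when it fills up.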

One of my colleagues wrote a couple of little programs using the 
qpid::client API to simulate a bursty producer and a slow consumer, and 
this seems to reproduce the problem: despite the circular queue, both 
qpidd and client-consumer memory consumption grow, and I start swapping 
madly on my 4GB box despite the queue only being 100MB.
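In case the attachments get stripped by the archive, the consumer is 
roughly this shape (queue name and sleep interval are placeholders):

#include <qpid/client/Connection.h>
#include <qpid/client/Session.h>
#include <qpid/client/Message.h>
#include <qpid/client/MessageListener.h>
#include <qpid/client/SubscriptionManager.h>
#include <unistd.h>

using namespace qpid::client;

// Deliberately slow consumer: sleep on every message so the RING queue
// upstream stays permanently full.
class SlowListener : public MessageListener {
    void received(Message& /*message*/) {
        usleep(50000); // 50ms per message - much slower than the producer
    }
};

int main() {
    Connection connection;
    connection.open("localhost", 5672);
    Session session = connection.newSession();
    SubscriptionManager subscriptions(session);
    SlowListener listener;
    subscriptions.subscribe(listener, "test-ring-queue");
    subscriptions.run(); // blocks, dispatching messages to the listener
    return 0;
}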

I'm more familiar with qpid::messaging, so I tried to reproduce the 
problem using that. What's interesting is that my first attempt couldn't 
reproduce it - but of course the APIs behave differently. Then I started 
wondering about flow control/capacity.

I usually do setCapacity(500) - for no other reason than I think that's 
the default for the JMS API. With that I was using more qpidd memory 
than I'd expect with a 100MB circular queue, but I then reduced it to 
100, then 10, and realised that the capacity (which I thought related 
only to prefetch on the client) was affecting both client and qpidd 
memory consumption. I also noticed that doing "link: {reliability: 
unreliable}" helped.
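To be concrete, my messaging-consumer boils down to something like this 
(queue name is a placeholder; the interesting bits are the capacity and 
the link options):

#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Receiver.h>
#include <qpid/messaging/Message.h>

using namespace qpid::messaging;

int main() {
    Connection connection("localhost:5672");
    connection.open();
    Session session = connection.createSession();

    // The "link: {reliability: unreliable}" that seemed to help.
    Receiver receiver = session.createReceiver(
        "test-ring-queue; {link: {reliability: unreliable}}");

    // Capacity is the client-side prefetch; dropping this from 500 to
    // 10 reduced both client and qpidd memory consumption.
    receiver.setCapacity(10);

    while (true) {
        Message message = receiver.fetch(); // blocks until a message arrives
        session.acknowledge();
    }
    return 0;
}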


I tried enabling flow control in the client-consumer to no avail (I 
struggle to understand that API!!) - I thought adding

SubscriptionSettings settings;
settings.autoAck = 100;                                  // accept messages in batches of 100
settings.flowControl = FlowControl::messageCredit(200);  // grant 200 message credits up front

subscription = subscriptions.subscribe(*this, queue, settings);

to my prepareQueue() method would be the way to do it, but that just 
seemed to cause my consumer to hang after it had received 200 messages.
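Looking at the FlowControl header again, my guess (and it's only a 
guess) is that messageCredit is a one-off allocation that is never 
renewed, whereas a window should be replenished as messages are 
completed - so perhaps something like this is what I actually wanted:

SubscriptionSettings settings;
settings.autoAck = 100; // accept in batches of 100
// messageWindow rather than messageCredit: assuming the window is
// renewed as messages complete, the consumer shouldn't stall after
// the first 200.
settings.flowControl = FlowControl::messageWindow(200);

subscription = subscriptions.subscribe(*this, queue, settings);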


So my messaging-consumer can be made to behave in a way that at least 
makes some sense, but what concerns me is back to the problem of 
federated links - as I say, we're seeing terrible resource leaks, and I 
assume that the federation bridge code is closer to what qpid::client is 
doing, so I've no idea whether we can configure a federated link to 
"honour" the behaviour that we'd expect from a circular queue (we use 
the default behaviour, which *should* be using unreliable messaging).

Hope this makes sense. I've attached the producer and two consumers 
that I've been using to try this stuff out.

I'd appreciate any thoughts, and especially any mitigations; this is 
starting to cause us real problems.

MTIA
Fraser.


-------- Original Message --------
Subject: 	Re: C++ broker memory leak in federated set-up???
Date: 	Thu, 01 Mar 2012 15:04:04 +0000
From: 	Gordon Sim <gsim@redhat.com>
Reply-To: 	users@qpid.apache.org
Organisation: 	Red Hat UK Ltd, Registered in England and Wales under 
Company Registration No. 3798903, Directors: Michael Cunningham (USA), 
Mark Hegarty (Ireland), Matt Parsons (USA), Charlie Peters (USA)
To: 	users@qpid.apache.org



On 02/29/2012 07:07 PM, Fraser Adams wrote:
> Hi All,
> I think that we may have stumbled across a potential memory/resource leak.
>
> We have one particular set up where we have a C++ producer client (using
> qpid::client - don't ask, it's a long story.....) this writes to a 0.8
> broker hosted on the same server. That broker is then federated via a
> queue route to amq.match on another (0.8) broker. The queue route is a
> source route set up via qpid-route -s
>
> We've been having all sorts of fun and games with respect to
> performance, which we've narrowed down to some dodgy networking.
>
> However one of the other effects that we've noticed is that the broker
> co-located with the producer client eats memory. The queue for the queue
> route is 1GB but qpidd eventually grows to ~35GB and sends the whole set
> up into swap.
>
> So with respect to the network problem: what is interesting is that
> when we checked with ethtool, the NIC was reporting that half duplex
> had been negotiated - ouch!!! Hence why we suspect a dodgy switch
> somewhere.
>
> Now when the NIC was explicitly set to 100 base/T full duplex our
> performance rocketed and the broker on the producer system appears
> (touch wood) to have stable memory performance.
>
> What I'm suspecting is that the dodgy network link has been causing
> connection drop-outs and the broker is automatically reconnecting (logs
> are confirming this) and I'm thinking that there is a resource leak
> somewhere during the reconnection process.

https://issues.apache.org/jira/browse/QPID-3447 perhaps? Though I
wouldn't have expected that to cause such a large growth in memory.

You're sure there is no backed-up queue anywhere?
