qpid-users mailing list archives

From "Robert Greig" <robert.j.gr...@gmail.com>
Subject Re: Java M3 Qpid broker memory consumption
Date Sat, 08 Nov 2008 13:44:13 GMT
2008/11/8 Keith Chow <keith.chow@xml-asia.org>:

> The cause is similar to this TCP congestion issue from the apache mina users
> list, http://mina.markmail.org/message/6q5t5gwdozypm6dk?q=byte%5B%5D+gc
> Is this expected behaviour with M3 java broker with slow client?

Thanks for doing this test. Now that the profiler output has shown
where the byte arrays are building up, it is clear what is
happening - I should have realised this sooner.

We had an analogous problem with "fast producers", where an app that
produced messages faster than they could be written to the network
would quickly exhaust the heap on the client. This is exactly the
same situation - MINA buffers pending writes in an unbounded queue.
It is quite bad since it gives an entirely false impression of the
size of private queues in particular (such as those used in a topic
implementation).
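To make the failure mode concrete, here is a small self-contained
simulation (plain Java, no MINA classes - the class name, rates and
frame sizes are made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustration only (not MINA code): models an unbounded pending-write
// queue like the one MINA keeps per session. The producer enqueues
// frames faster than the slow "socket" drains them, so the buffered
// bytes grow without bound - which is what shows up in the profiler
// as byte[] build-up.
public class PendingWriteGrowth {

    static long simulate(int ticks, int framesInPerTick,
                         int framesOutPerTick, int frameSize) {
        Deque<byte[]> pendingWrites = new ArrayDeque<>(); // unbounded
        long queuedBytes = 0;
        for (int tick = 0; tick < ticks; tick++) {
            // fast producer: enqueue frames regardless of drain rate
            for (int i = 0; i < framesInPerTick; i++) {
                pendingWrites.add(new byte[frameSize]);
                queuedBytes += frameSize;
            }
            // slow consumer: the socket drains fewer frames per tick
            for (int i = 0; i < framesOutPerTick && !pendingWrites.isEmpty(); i++) {
                queuedBytes -= pendingWrites.remove().length;
            }
        }
        return queuedBytes;
    }

    public static void main(String[] args) {
        // 10 frames in, 2 frames out per tick, 1 KB frames, 1000 ticks:
        // (10 - 2) * 1000 * 1024 = 8192000 bytes left buffered
        System.out.println(simulate(1000, 10, 2, 1024));
    }
}
```

The queue grows linearly for as long as the imbalance lasts, so the
broker's heap usage reflects the backlog, not the queue depth the
broker reports.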

This is definitely something we need to fix - I will start a thread on
qpid-dev to discuss since I know work was done on the client issue and
it may be straightforward to apply that technique on the broker.

In the meantime, one alternative solution that may work for you is to
modify the class
org.apache.qpid.server.protocol.AMQMinaProtocolSession. Adjust the
writeFrame method as follows:

public void writeFrame(AMQDataBlock frame)
{
    _lastSent = frame;

    _lastWriteFuture = _minaProtocolSession.write(frame);
    // block the sending thread until the data has been
    // written to the socket
    _lastWriteFuture.join();
}

As the comment indicates, this will cause each call to writeFrame to
block until the data has gone out onto the socket.

This is not a particularly good solution from a performance
perspective, since the broker will make many more system calls as a
result - particularly with smaller messages. However, at a low rate
such as 200 messages/second this should be fine, I think.

If you try this, you should see that messages do build up in the queues.

> As an interim solution, we've modified the broker to detect slow topic
> consumers (by inspecting expiry timestamp for our usecase) and kill them off
> (with mina protocol's close session call). This allowed
> GC to reclaim the dead client's memory resource.

It would be good to combine a fix like that (but inspecting queue
depths on private queues instead) with a proper fix that prevents the
pending MINA write queues from growing too large.
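A broker-side guard along those lines might look like the sketch
below. This is only an illustration: the Session class, the
MAX_QUEUE_DEPTH threshold and the reap method are made-up names
standing in for the real AMQMinaProtocolSession plumbing, and in the
broker close() would be the MINA session close call.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a periodic slow-consumer check: inspect the depth of each
// private (e.g. topic subscription) queue and disconnect the session
// when it exceeds a threshold. All names here are hypothetical.
public class SlowConsumerReaper {

    static final int MAX_QUEUE_DEPTH = 1000; // assumed threshold

    static class Session {
        final String name;
        final int queueDepth;
        boolean open = true;

        Session(String name, int queueDepth) {
            this.name = name;
            this.queueDepth = queueDepth;
        }

        void close() {
            // in the broker this would close the underlying MINA session,
            // letting GC reclaim the dead client's buffered frames
            open = false;
        }
    }

    // Returns the names of the sessions that were closed.
    static List<String> reap(List<Session> sessions) {
        List<String> closed = new ArrayList<>();
        for (Session s : sessions) {
            if (s.open && s.queueDepth > MAX_QUEUE_DEPTH) {
                s.close();
                closed.add(s.name);
            }
        }
        return closed;
    }

    public static void main(String[] args) {
        List<Session> sessions = new ArrayList<>();
        sessions.add(new Session("fast-consumer", 10));
        sessions.add(new Session("slow-consumer", 5000));
        System.out.println(reap(sessions)); // only the slow consumer is closed
    }
}
```

Keying the check on queue depth rather than message expiry would make
it work for use cases that do not set a TTL.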

I will raise some JIRAs for this since it is an important issue.

