apr-dev mailing list archives

From "peter baer" <peterb...@gmail.com>
Subject Re: apr_memcache 1.3.X sending unnecessary QUIT to memcached server
Date Tue, 07 Oct 2008 22:08:49 GMT
On Fri, Oct 3, 2008 at 3:48 AM, Ruediger Pluem <rpluem@apache.org> wrote:

> I guess the documentation is wrong. This behaviour was changed a while
> ago to the behaviour you describe: If a resource is expired it will not
> be handed out any longer but it will be closed. This is needed / useful
> e.g. for database connections that become faulty after some time because
> the database server closes them after some time of inactivity or a firewall
> in between breaks the connection after some time. It is also very useful
> when pooling persistent http connections and you know the keepalive
> timeout settings of your backend.

With the current revision of apr_util (apr_memcache) we encountered a
memory leak and sub-optimal behavior in the following scenario:

A massively parallel application (several hundred processing threads),
each thread needing to make several memcache queries per second during
message processing. A patch was applied recently that plugs the memory
leaks in the apr_memcache module by freeing the APR pool from the
resource-list destructor whenever a connection's TTL (in microseconds)
expires. This puts users of the memcache module in somewhat of a bind:
they are forced to choose between:
- Setting the TTL to a high value and leaking memory under high-load
conditions (the TTL is never hit because each socket is in use
frequently).
 As a side note, we considered modifying the socket selection
algorithm to step incrementally through the socket list (currently it
selects the first available socket). That would allow us to set a
large TTL (reducing TCP connection overhead) but would require a large
minimum number of connections to the memcached server.
- Setting the TTL to a low value (hundreds or thousands of
microseconds) and having to reconnect to the memcached server for each
request.

What we are trying to accomplish is having a large number (50-100) of
persistent connections to the memcache server(s).  This is not
currently possible without leaking large amounts of memory.

The current stable revision of apr_memcache leaks memory because of
its use of a Bucket Brigade inside the socket/conn structure: it
copies the data off the socket buffer into this Bucket Brigade and
then copies it again into the APR pool passed to apr_memcache_getp.
The data stays in the Bucket Brigade until the TTL expires on the
socket/conn, at which point the socket is torn down and the Bucket
Brigade is cleared.
Our fix simply removes this extra step and bypasses the Bucket
Brigade, copying the data directly into the pool passed to
apr_memcache_getp.

This has worked great for us: no more memory leaks and very few TCP
reconnections (a TTL of 120 seconds, i.e. 120,000,000 microseconds).
Hopefully it works well for everyone else in the community :)

Credits for this patch go to:
Tavis Paquette (tavis at galaxytelecom dot net)
Peter Baer (pbaer at galaxytelecom dot net)
