httpd-dev mailing list archives

From Joe Orton <jor...@redhat.com>
Subject Re: huge memory leak in 2.0.x
Date Tue, 15 Jun 2004 14:23:42 GMT
http://issues.apache.org/bugzilla/show_bug.cgi?id=23567

On Mon, Jun 14, 2004 at 01:45:26PM -0600, Brad Nicholes wrote:
>    Actually I think this was addressed quite a while ago with the
> introduction of the MaxMemFree directive.  This problem sounds a lot
> like the bucket issue, where the memory allocated for the bucket
> brigade that pushes data from the CGI app to the wire was simply held
> on to, on the assumption that it would be needed later.  The
> MaxMemFree directive allows the memory pool manager to release excess
> memory rather than letting it hang around.
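(For reference, the directive Brad mentions is set in the main server
config; a minimal httpd.conf fragment, where the 2048 KB limit is just
an illustrative value, not a recommendation:)

```
# Cap the free memory each allocator may retain, in KBytes.
# Excess free blocks are returned to the system instead of kept.
MaxMemFree 2048
```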

The problem here looks exactly as Jeff diagnosed in the bug report: for
each apr_brigade_split() call by the content-length filter, a new
cleanup is allocated out of the request pool.  So in this case you leak
32 bytes for every 4K block the CGI script produces: the output is
consumed faster than it is produced, so read() keeps returning EAGAIN
on the pipe, and the filter keeps wanting to flush.

It's actually very easy to fix that leak by putting used cleanup
structures onto a freelist and re-using them, e.g.:
http://cvs.apache.org/~jorton/apr_cleanfree.diff

But that doesn't solve the problem: apr_brigade_split() pallocs another
32 bytes for the new brigade structure itself, so there's still a leak.

My naive attempts to allocate the brigade structure from either the
bucket allocator or plain malloc/free broke everything in strange ways.
So, calling the bucket gurus again :) - are there fundamental reasons
why the bucket allocator is not going to work here?  Any better ideas?

joe
