httpd-dev mailing list archives

From Cliff Woolley <>
Subject Re: Buckets destroy not cleaning up private structures?
Date Sun, 29 Apr 2001 03:16:29 GMT
On Sat, 28 Apr 2001, Justin Erenkrantz wrote:

> Somebody please tell me that this stupid keepalive bug isn't as
> simple as this patch:

Not quite.  The problem is in AP_MIN_BYTES_TO_SEND.  If the file bucket
has a little data left over at the end of one request on a kept-alive
connection, the file actually has to stay open until the beginning of
the next request is handled, so that the packet size gets bumped up above
AP_MIN_BYTES_TO_SEND (which is currently 8K, btw).  So what needs to
happen is this: if, at the end of the request, any unsent data is left
over that WON'T be sent until the next request is processed, that data
should be copied into a heap bucket so that the file can be closed.
At that point, the patch below also applies and is correct.

Make sense?
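To illustrate the idea (this is a self-contained sketch of the principle, not
the actual APR bucket code -- the `bucket_t` struct and `bucket_to_heap()`
are hypothetical names, and real file buckets morph via `apr_bucket_read()`):
at end-of-request, copy the unsent tail of the file into a heap buffer, then
close the descriptor so it doesn't stay pinned across keepalive requests.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a file bucket: an open FILE* plus the
 * range of bytes that have not been sent yet. */
typedef struct {
    FILE *fp;      /* open file, or NULL once converted to heap */
    long offset;   /* start of the unsent data within the file */
    size_t len;    /* number of unsent bytes */
    char *heap;    /* heap copy of the unsent data after conversion */
} bucket_t;

/* Copy the bucket's remaining file data into a heap buffer and close
 * the file, so the descriptor is released at end-of-request instead of
 * living until the next request (or the end of the connection). */
static int bucket_to_heap(bucket_t *b)
{
    if (b->fp == NULL)
        return 0;                  /* already heap-backed, nothing to do */

    b->heap = malloc(b->len);
    if (b->heap == NULL)
        return -1;

    if (fseek(b->fp, b->offset, SEEK_SET) != 0 ||
        fread(b->heap, 1, b->len, b->fp) != b->len) {
        free(b->heap);
        b->heap = NULL;
        return -1;                 /* leave the bucket file-backed on error */
    }

    fclose(b->fp);                 /* descriptor no longer pinned open */
    b->fp = NULL;
    return 0;
}
```

In the real server the same effect falls out of the bucket API: reading a
file bucket converts (at least part of) it into a heap bucket in place, after
which the file can be closed.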

> -    if ((status = apr_file_open(&fd, r->filename, APR_READ | APR_BINARY, 0, r->connection->pool)) != APR_SUCCESS) {
> +    if ((status = apr_file_open(&fd, r->filename, APR_READ | APR_BINARY, 0, r->pool)) != APR_SUCCESS) {
>          ap_log_rerror(APLOG_MARK, APLOG_ERR, status, r,
>  		     "file permissions deny server access: %s", r->filename);
>          return HTTP_FORBIDDEN;
>      }
> This works for me.  And, it even makes sense.  Why are we opening the
> file from the connection's pool rather than our request pool?

See above.

> Now, I wonder if those connections pools are being freed.  As I said,
> in keepalive mode, all of the file descriptors are kept open until the
> process terminates.  Is that pool not getting freed somehow?  Off to
> track that down.  -- justin

If THAT's really happening, then we're even worse off than we thought...


   Cliff Woolley
   Charlottesville, VA
