httpd-dev mailing list archives

From "Bill Stoddard" <b...@wstoddard.com>
Subject Re: cvs commit: httpd-2.0/server core.c
Date Tue, 01 May 2001 18:16:59 GMT

> On Tue, 1 May 2001, Bill Stoddard wrote:
>
> > This patch is seriously broken.  Request a very large file (100MB or
> > greater) and watch what happens to memory usage.
> >
> > The problem is this loop. We basically read the entire content of the
> > file into memory before sending it out on the network. Haven't given
> > much thought on the best way to fix this.
> >
> > >   +                APR_BRIGADE_FOREACH(bucket, b) {
> > >   +                    const char *str;
> > >   +                    apr_size_t n;
> > >   +
> > >   +                    rv = apr_bucket_read(bucket, &str, &n, APR_BLOCK_READ);
> > >   +                    apr_brigade_write(ctx->b, NULL, NULL, str, n);
> > >   +                }
>
> I don't see how that could happen.  We only enter that section of the
> core_output_filter if we are saving some data off to the side for
> keepalive requests.  In fact, we specifically do not enter this loop if we
> are serving a file from disk.

Attach a debugger and watch what happens.  I am seeing the following buckets...

1 heap bucket containing the headers
1 file bucket with the file descriptor
1 eos bucket
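
That layout is about what you'd expect here: the handler passes down a single
file bucket plus an eos, and the header filter prepends a heap bucket holding
the headers.  Roughly (signatures approximate; fd and finfo are just
illustrative names for the handler's open file and its stat info), the
handler side looks something like:

    /* sketch only: fd/finfo assumed from the handler's open/stat of the file */
    apr_bucket_brigade *bb = apr_brigade_create(r->pool,
                                                r->connection->bucket_alloc);

    /* the whole file is represented by one file bucket; nothing is read
     * into memory at this point */
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_file_create(fd, 0, (apr_size_t)finfo.size, r->pool,
                               r->connection->bucket_alloc));

    /* terminate the response */
    APR_BRIGADE_INSERT_TAIL(bb,
        apr_bucket_eos_create(r->connection->bucket_alloc));

    ap_pass_brigade(r->output_filters, bb);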

The following code is hit, and we enter the conditional because the last
bucket was an eos and the connection is keep-alive:

if ((!fd && !more &&
     (nbytes < AP_MIN_BYTES_TO_WRITE) && !APR_BUCKET_IS_FLUSH(e))
    || (APR_BUCKET_IS_EOS(e) && c->keepalive)) {

I think the logic in the conditional is just wrong.
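
Whatever we decide about the conditional, the loop itself probably also needs
to stop pulling file buckets into memory.  One direction that might avoid the
copy (an untested sketch, not what the committed code does) is to move the
buckets into ctx->b and set them aside, so a file bucket stays a file bucket
instead of being read into the heap:

    /* untested sketch: let each bucket type's setaside do the work rather
     * than apr_bucket_read()ing everything and copying it with
     * apr_brigade_write() */
    while (!APR_BRIGADE_EMPTY(b)) {
        apr_bucket *bucket = APR_BRIGADE_FIRST(b);
        APR_BUCKET_REMOVE(bucket);
        rv = apr_bucket_setaside(bucket, c->pool);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        APR_BRIGADE_INSERT_TAIL(ctx->b, bucket);
    }

Whether c->pool is the right lifetime for the set-aside buckets I haven't
checked; the point is just that the saved data shouldn't have to pass through
apr_brigade_write().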


Bill

