httpd-dev mailing list archives

From <...@covalent.net>
Subject Re: cvs commit: httpd-2.0/server core.c
Date Tue, 01 May 2001 18:04:31 GMT
On Tue, 1 May 2001, Bill Stoddard wrote:

> This patch is seriously broken.  Request a very large file (100MB or
> greater) and watch what happens to memory usage.
>
> The problem is this loop.  We basically read the entire content of the
> file into memory before sending it out on the network.  Haven't given
> much thought to the best way to fix this.
>
> >   +                APR_BRIGADE_FOREACH(bucket, b) {
> >   +                    const char *str;
> >   +                    apr_size_t n;
> >   +
> >   +                    rv = apr_bucket_read(bucket, &str, &n, APR_BLOCK_READ);
> >   +                    apr_brigade_write(ctx->b, NULL, NULL, str, n);
> >   +                }

I don't see how that could happen.  We only enter that section of the
core_output_filter if we are saving some data off to the side for
keepalive requests.  In fact, we specifically do not enter this loop if we
are serving a file from disk.
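
For anyone trying to follow the bucket mechanics, here is roughly what
that loop amounts to with APR_BRIGADE_FOREACH expanded (a sketch of
mine, not the code in core.c, with error handling added):

    #include "apr_buckets.h"

    /* Sketch of why a loop like the one quoted above can buffer a
     * whole file: reading a file-backed bucket with APR_BLOCK_READ
     * materializes its data, and apr_brigade_write then copies that
     * data again into heap buckets in the destination brigade. */
    static apr_status_t copy_aside(apr_bucket_brigade *in,
                                   apr_bucket_brigade *out)
    {
        apr_bucket *bucket;

        for (bucket = APR_BRIGADE_FIRST(in);
             bucket != APR_BRIGADE_SENTINEL(in);
             bucket = APR_BUCKET_NEXT(bucket)) {
            const char *str;
            apr_size_t n;
            apr_status_t rv;

            /* For a FILE bucket this read pulls file content into
             * memory; the bucket morphs in place and any remainder
             * becomes the next bucket in the brigade. */
            rv = apr_bucket_read(bucket, &str, &n, APR_BLOCK_READ);
            if (rv != APR_SUCCESS) {
                return rv;
            }
            /* Second copy: the data is duplicated into heap buckets. */
            apr_brigade_write(out, NULL, NULL, str, n);
        }
        return APR_SUCCESS;
    }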

The only way this could be the culprit is if we have MMAP'ed the file
and didn't send it out for some reason.  I will try to look at this
today and see what the problem is.
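
If MMAP does turn out to be the issue, one direction we could try
(just a sketch, completely untested) is to set the buckets aside
instead of reading them, so FILE and MMAP buckets keep pointing at
their underlying resources rather than being copied into the heap:

    static apr_status_t save_aside(apr_bucket_brigade *in,
                                   apr_bucket_brigade *saved,
                                   apr_pool_t *pool)
    {
        while (!APR_BRIGADE_EMPTY(in)) {
            apr_bucket *bucket = APR_BRIGADE_FIRST(in);
            apr_status_t rv;

            /* setaside only has to guarantee the bucket's resources
             * survive as long as 'pool'; for FILE and MMAP buckets
             * that is cheap and copies no file content. */
            rv = apr_bucket_setaside(bucket, pool);
            if (rv != APR_SUCCESS && rv != APR_ENOTIMPL) {
                return rv;
            }
            APR_BUCKET_REMOVE(bucket);
            APR_BRIGADE_INSERT_TAIL(saved, bucket);
        }
        return APR_SUCCESS;
    }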

Ryan

_______________________________________________________________________________
Ryan Bloom                        	rbb@apache.org
406 29th St.
San Francisco, CA 94131
-------------------------------------------------------------------------------

