httpd-dev mailing list archives

From: Greg Stein <gst...@lyra.org>
Subject: Re: chunking of content in mod_include?
Date: Tue, 28 Aug 2001 07:49:47 GMT
On Mon, Aug 27, 2001 at 08:24:50PM -0700, Ryan Bloom wrote:
>...
> > This code was put in because we were seeing the mod_include code buffer up
> > the entire collection of buckets until an SSI tag was found. If you have a
> > 200 MB file with an SSI tag footer at the end of the brigade, the whole
> > thing was being buffered. How do you propose that this be done differently?
> 
> I don't care if mod_include buffers 200 Megs, as long as it is constantly doing
> something with the data.  If we have a 200 Meg file that has no SSI tags in
> it, but we can get all 200 Meg at one time, then we shouldn't have any problem
> just scanning through the entire 200 Megs very quickly.  Worst case, we do what
> Brian suggested, and just check the bucket length once we have finished
> processing all of the data in that bucket.  The buffering only becomes a
> real problem when we sit waiting for data from a CGI or some other slow
> content generator.

It becomes a problem if you load too much of that file into the heap. Thus,
we have the threshold.

The threshold continues to make sense. Checking it at bucket boundaries is a
very good optimization. If a FILE bucket manages to map the whole sucker into
memory, and returns one massive 200M memory buffer to scan ... fine. But
after that scan, we probably ought to deliver it down the stack :-)

But if that FILE bucket keeps spitting out 8K HEAP buckets instead, then we
also need to flush those out of memory once they add up.
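Roughly like this (a sketch only: SCAN_THRESHOLD and scan_filter are my
names, and the real code also has to carry partial-tag state across bucket
boundaries, which I'm omitting):

#include "httpd.h"
#include "apr_buckets.h"
#include "util_filter.h"

#define SCAN_THRESHOLD (64 * 1024)   /* illustrative flush threshold */

static apr_status_t scan_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_size_t scanned = 0;
    apr_bucket *e = APR_BRIGADE_FIRST(bb);

    while (e != APR_BRIGADE_SENTINEL(bb) && !APR_BUCKET_IS_EOS(e)) {
        const char *data;
        apr_size_t len;
        apr_status_t rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);

        if (rv != APR_SUCCESS) {
            return rv;
        }

        /* ... scan data for SSI tags (omitted) ... */

        scanned += len;
        e = APR_BUCKET_NEXT(e);

        /* Bucket-boundary threshold check: once enough scanned data has
         * piled up, split off the unscanned tail and pass the scanned
         * buckets down the stack instead of holding them in the heap. */
        if (scanned >= SCAN_THRESHOLD && e != APR_BRIGADE_SENTINEL(bb)) {
            apr_bucket_brigade *rest = apr_brigade_split(bb, e);

            rv = ap_pass_brigade(f->next, bb);
            if (rv != APR_SUCCESS) {
                apr_brigade_destroy(rest);
                return rv;
            }
            bb = rest;
            e = APR_BRIGADE_FIRST(bb);
            scanned = 0;
        }
    }

    /* Pass whatever is left (including EOS) downstream. */
    return ap_pass_brigade(f->next, bb);
}

The point being that the split happens at a bucket boundary, so nothing gets
copied or re-buffered; the already-scanned buckets just get handed to the
next filter.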

Brian's recently posted patch looks good for this.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/
