httpd-dev mailing list archives

From "Paul J. Reder" <rede...@remulak.net>
Subject Re: chunking of content in mod_include?
Date Tue, 28 Aug 2001 02:25:22 GMT
Ryan Bloom wrote:
> 
> On Monday 27 August 2001 16:05, Brian Pane wrote:
> > In mod_include's find_start_sequence function, there's some code that
> > splits the current bucket if "ctx->bytes_parsed >= BYTE_COUNT_THRESHOLD."
> >
> > Can somebody explain the rationale for this?  It seems redundant to be
> > splitting the data into smaller chunks in a content filter; I'd expect
> > mod_include to defer network block sizing to downstream filters.  In the
> > profile data I'm looking at currently, this check accounts for 35% of the
> > total run time of find_start_sequence, so there's some performance to
> > be gained if the "ctx->bytes_parsed >= BYTE_COUNT_THRESHOLD" check can
> > be eliminated.
> 
> It is used to ensure that we don't buffer all the data in mod_include.  It isn't
> really done correctly though, because what we should be doing is continuing
> to read as much data as possible, and as soon as we can't read something, sending
> what we have down the filter stack.
> 
> This variable basically ensures we don't keep reading all the data until we
> process the whole file or reach the first tag.
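
For reference, the pattern Brian is describing looks roughly like this. This is
only an illustrative sketch, not the actual mod_include source; the helper name
and the BYTE_COUNT_THRESHOLD value are placeholders, and the tag matching itself
is elided:

#include "apr_buckets.h"

#define BYTE_COUNT_THRESHOLD 4096   /* placeholder value for this sketch */

/* Returns the first bucket of the unscanned remainder (the caller would
 * split the brigade there and pass the leading part downstream), or NULL
 * if the whole brigade was scanned without hitting the limit. */
static apr_bucket *scan_with_threshold(apr_bucket_brigade *bb,
                                       apr_size_t *bytes_parsed)
{
    apr_bucket *b;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *buf;
        apr_size_t len, i;

        if (apr_bucket_read(b, &buf, &len, APR_BLOCK_READ) != APR_SUCCESS) {
            return NULL;
        }
        for (i = 0; i < len; i++) {
            /* ... match the "<!--#" start sequence here; the real code
             * also tracks partial matches that straddle buckets ... */
            (*bytes_parsed)++;
            if (*bytes_parsed >= BYTE_COUNT_THRESHOLD) {
                /* Split so buf[0..i] stays in this bucket and the rest
                 * moves to a new bucket that follows it. */
                apr_bucket_split(b, i + 1);
                return APR_BUCKET_NEXT(b);
            }
        }
    }
    return NULL;
}

The per-byte test on *bytes_parsed in the inner loop is the check whose cost is
showing up in Brian's profile.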

What do you mean by "as soon as we can't read something"? It is my understanding
that the bucket code hides reading delays from the mod_include code. If that is true,
how would the mod_include code know when to send a chunk along? Are you saying the
bucket code should do some magic like sending all buckets in the brigade up to the
current one? This would wreak havoc on code like mod_include, which may be setting
aside or tagging buckets for replacement when the end of the tag is found.
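
If you mean something like a non-blocking apr_bucket_read, where a read that would
block triggers sending what we have so far down the stack, then it would look
roughly like this. Just a sketch: the function name flush_on_block is made up, and
none of the tag-matching or setaside state that a real filter has to carry across
the flush is shown:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t flush_on_block(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket *b = APR_BRIGADE_FIRST(bb);

    while (b != APR_BRIGADE_SENTINEL(bb)) {
        const char *buf;
        apr_size_t len;
        apr_status_t rv = apr_bucket_read(b, &buf, &len, APR_NONBLOCK_READ);

        if (APR_STATUS_IS_EAGAIN(rv)) {
            /* The data isn't available yet: split the brigade here and
             * send what we have already scanned down the filter stack. */
            apr_bucket_brigade *remainder = apr_brigade_split(bb, b);
            rv = ap_pass_brigade(f->next, bb);
            if (rv != APR_SUCCESS) {
                apr_brigade_destroy(remainder);
                return rv;
            }
            /* ... the filter would set the remainder aside and return,
             * to be invoked again when more data arrives ... */
            return APR_SUCCESS;
        }
        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* ... scan buf[0..len) for SSI tags here ... */
        b = APR_BUCKET_NEXT(b);
    }
    return ap_pass_brigade(f->next, bb);
}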

This code was put in because we were seeing the mod_include code buffer up the entire
collection of buckets until an SSI tag was found. If you had a 200 MB file with an
SSI tag in a footer at the end of the brigade, the whole thing was buffered before
anything was sent along. How do you propose that this be done differently?

The only thing I can think of is to add to and check the byte tally at bucket
boundaries. We might go over BYTE_COUNT_THRESHOLD, but the check wouldn't happen
on every byte, and there wouldn't need to be a bucket split to send along the
first part. Is this what you mean?
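
Roughly, I am picturing something like the following. Again just a sketch with a
made-up function name and a placeholder threshold value, and with the tag-matching
and setaside details left out: add each bucket's length to a running tally and test
the threshold once per bucket, passing the scanned buckets along whole when the
tally crosses the limit.

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

#define BYTE_COUNT_THRESHOLD 4096   /* placeholder value for this sketch */

static apr_status_t pass_at_boundaries(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_size_t tally = 0;
    apr_bucket *b = APR_BRIGADE_FIRST(bb);

    while (b != APR_BRIGADE_SENTINEL(bb)) {
        const char *buf;
        apr_size_t len;
        apr_status_t rv = apr_bucket_read(b, &buf, &len, APR_BLOCK_READ);

        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* ... scan buf[0..len) for the SSI start sequence ... */

        tally += len;
        b = APR_BUCKET_NEXT(b);

        if (tally >= BYTE_COUNT_THRESHOLD
            && b != APR_BRIGADE_SENTINEL(bb)) {
            /* Threshold crossed at a bucket boundary: pass the scanned
             * buckets downstream whole, no split required, and carry on
             * with the rest of the brigade. */
            apr_bucket_brigade *rest = apr_brigade_split(bb, b);
            rv = ap_pass_brigade(f->next, bb);
            if (rv != APR_SUCCESS) {
                apr_brigade_destroy(rest);
                return rv;
            }
            bb = rest;
            b = APR_BRIGADE_FIRST(bb);
            tally = 0;
        }
    }
    return ap_pass_brigade(f->next, bb);
}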

Paul J. Reder
