httpd-dev mailing list archives

From Greg Ames <grega...@raleigh.ibm.com>
Subject Re: cvs commit: apache-2.0/src/main http_core.c http_protocol.c
Date Wed, 04 Oct 2000 16:40:23 GMT
rbb@covalent.net wrote:

> >
> > Yes, the cost of allocating/destroying buckets could be greatly reduced.  But it still
> > costs something, and you can't avoid setting some fields in each bucket.  Those
> > operations are likely to cause cache misses when the buckets are first stored into,
> > then again each time a filter references them many nanoseconds later.
>
> Why would we have a cache miss when referencing the bucket?  I expect a
> cache miss the first time we store into a bucket, but not when referencing
> the data that we stored.

This may be beating a dead horse somewhat, so I'll try to be brief.  When Bill started
working on this, he was seeing literally thousands of bucket brigades for an index listing
of a directory with a fair number of files.  For a directory with one file, he showed me
over a hundred, each representing legitimate data, much to my surprise.  With HTTP/1.1, half
of these are for chunk headers.
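
(Purely for illustration, here is a rough sketch of the per-call pattern.  The bucket
calls below use the apr_bucket names from later httpd 2.x trees, not necessarily the
exact ap_bucket_* names in the tree this thread is about, and emit_string is a made-up
helper, not anything in mod_autoindex.)

#include <string.h>
#include "apr_buckets.h"

/* One bucket per string: each call allocates a fresh apr_bucket,
 * writes its fields (type, data pointer, length, list links), and
 * chains it onto the brigade.  Those stores are the first time the
 * bucket's cache lines are touched. */
static void emit_string(apr_bucket_brigade *bb, const char *str)
{
    apr_bucket *b = apr_bucket_transient_create(str, strlen(str),
                                                bb->bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, b);
}

/* A directory listing built this way does it hundreds of times:
 *   emit_string(bb, "<li><a href=\"");
 *   emit_string(bb, name);
 *   emit_string(bb, "\">");
 *   ... and so on for every fragment of every row. */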

If we buffer at the top of the filter chain but don't coalesce until the core filter,
nearly all the chunk headers and the corresponding bucket structures go away with little or
no additional work.  But we still have on the order of a thousand buckets left for a
directory with a bunch of files.  On each ap_r* call we allocate a bucket structure,
initialize it, and hook it into the brigade. Then we unwind back to mod_autoindex's
handler.  It does its thing and generates the next little string, then calls ap_r* again.
When it finally gets done generating the page, our hypothetical buffering code at the top
of the filter chain sends the brigade down.  Each filter runs through the brigade, starting
with the first bucket.  But by now the first bucket is ancient history to the cache and we
are likely to miss.
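
(And a sketch of the traversal each filter does, again using the later apr_bucket and
util_filter names rather than the exact code in this tree; some_filter is hypothetical.)

#include "apr_buckets.h"
#include "util_filter.h"

static apr_status_t some_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket *b;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *data;
        apr_size_t len;
        apr_status_t rv;

        if (APR_BUCKET_IS_EOS(b)) {
            break;
        }
        /* Reading the bucket dereferences a structure that was
         * allocated and initialized way back in the ap_r* call;
         * by now it is ancient history to the cache. */
        rv = apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* ... look at or transform data ... */
    }
    return ap_pass_brigade(f->next, bb);
}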

Greg

