httpd-dev mailing list archives

From Nick Kew <>
Subject Re: svn commit: r468373 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h modules/cache/mod_mem_cache.c
Date Mon, 30 Oct 2006 12:44:46 GMT
On Mon, 30 Oct 2006 14:03:03 +0200 (SAST)
"Graham Leggett" <> wrote:

> On Mon, October 30, 2006 12:57 pm, Nick Kew wrote:
> >> The current expectation that it be possible to separate completely
> >> the storing of the cached response and the delivery of the content
> >> is broken.
> >
> > Why is that?
> Because:
> - the cache_body() hook is expected to swallow an entire brigade
> and write it to the cache completely before the brigade is
> written to the network.
> In the case of files, that means one brigade, containing one bucket,
> containing one entire file. For a 4.7GB DVD ISO file, that means many
> minutes before the response starts arriving at the client, which has
> timed out at this point.

Hang on!  Where's the file coming from?  If it's local and static,
what is mod_cache supposed to gain you?  And if not, what put it
in a (single) file bucket before it reached mod_cache?

> - apr_bucket_read() assumes that a bucket will only ever be read once.
> In so doing, it may morph buckets into heap buckets while reading,
> when buckets are too large to be read in one go. This behaviour is
> undocumented (I plan to fix that).

Yes.  But what is reading them?

> If these heap buckets are not immediately deleted, they will last the
> lifetime of a request. They are not deleted in mod_disk_cache because
> later, we need to write these same buckets to the network. Out of
> memory ensues.

If mod_disk_cache gets a single file bucket as input, does it
actually need to read the file?  It can send the file bucket
down the chain as-is, having given it a filesystem entry in
cache space.
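That "filesystem entry in cache space" idea can be sketched in plain C (this is my illustration, not mod_disk_cache code; the helper name cache_link_or_copy is made up): on the same filesystem, link(2) gives the bucket's file a cache-space name without reading a single byte; only the cross-filesystem case degrades to a streamed, bounded-memory copy.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Give an existing file a second name in cache space without
 * reading it.  If the cache root is on the same filesystem,
 * link(2) is O(1) and no data moves.  Otherwise fall back to a
 * streamed copy, one bounded chunk at a time.  Illustrative only. */
static int cache_link_or_copy(const char *src, const char *dst)
{
    if (link(src, dst) == 0)
        return 0;               /* same filesystem: nothing read, nothing copied */

    /* Different filesystem (or link refused): stream it. */
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    char buf[8192];
    size_t n;

    if (!in || !out) {
        if (in)  fclose(in);
        if (out) fclose(out);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out); /* memory use stays at one chunk */
    fclose(in);
    fclose(out);
    return 0;
}
```

The point of the sketch: the expensive path (the copy loop) only runs when the filesystems differ, which is exactly the case discussed below.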

OK, that falls down if the cache's filespace is not on the same
disc as the file bucket.  But that in itself is a major overhead,
and brings back my first question: what is mod_cache supposed to gain?

Mod_cache fronting a jukebox?  Right, then you do want to copy
the file: can't the cache filter itself pass buckets as it reads
them?  Of course it can.  But just because this case exists
doesn't mean the cache filter should insist on reading every
file bucket it gets!

OK, how about this for an alternative: introduce an apr_bucket_clone
method, that works by reference-counting and lazy copying, and
in the case of a file bucket, asynchronous copying.  The filter
can clone the bucket, pass one copy on immediately, and save the
other: then the save will actually read the file if and only
if it's copying between filesystems, and the filter chain can
use sendfile.
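The reference-counting half of that proposal can be sketched in plain C.  To be clear: no apr_bucket_clone exists in APR as proposed here (apr_bucket_copy is the nearest existing relative), and every name below is a toy stand-in, not an APR API.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A toy bucket: clone() bumps a shared refcount instead of copying
 * data; the actual copy is deferred until one holder needs a private
 * version (lazy copy-on-write).  Illustrative names only. */
typedef struct {
    char   *data;
    size_t  len;
    int    *refs;   /* shared across all clones */
} bucket_t;

static bucket_t bucket_make(const char *s)
{
    bucket_t b;
    b.len  = strlen(s);
    b.data = malloc(b.len + 1);
    memcpy(b.data, s, b.len + 1);
    b.refs  = malloc(sizeof *b.refs);
    *b.refs = 1;
    return b;
}

static bucket_t bucket_clone(bucket_t *b)     /* O(1): no data copied */
{
    ++*b->refs;
    return *b;
}

static void bucket_make_private(bucket_t *b)  /* the "lazy copy" step */
{
    if (*b->refs > 1) {
        char *copy = malloc(b->len + 1);
        memcpy(copy, b->data, b->len + 1);
        --*b->refs;                           /* leave the shared copy */
        b->data = copy;
        b->refs  = malloc(sizeof *b->refs);
        *b->refs = 1;
    }
}

static void bucket_destroy(bucket_t *b)
{
    if (--*b->refs == 0) {
        free(b->data);
        free(b->refs);
    }
}
```

In the mod_cache picture: the filter clones, passes one clone downstream at once (sendfile-friendly), and the saved clone only pays for a real copy if the cache store actually needs one.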

I haven't thought this through: I put it forward as the kind of
proposal that might fix the problem without breaking Justin's

> Previous discussion is just noise, it would be better if I explain
> again.


> > That suggests broken implementation and/or inappropriate usage.
> > It says nothing about expectation.
> Sorry, but when Google buys YouTube for a Googol dollars, the argument
> that nobody wants to serve large files makes no sense.

Nobody said that.

> The existing mod_cache, regardless of configuration, and regardless of
> cache disk size, can under no circumstances cache a file bigger than
> available RAM.
> This is well and truly broken.

Really?  So if a DVD image comes in 8K chunks from mod_proxy,
mod_cache is going to buffer everything?  Erm .... why?

Are you saying mod_cache enforces that?  Or mod_disk_cache?
In the latter case, there's always the option of introducing
a new provider for large files.
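The 8K-chunk case above is the one where buffering is plainly unnecessary: each chunk can be teed to the cache file and to the client as it arrives, then discarded.  A minimal sketch of that pattern (my names, not mod_disk_cache's):

```c
#include <assert.h>
#include <stdio.h>

/* Tee a streamed response body: write each chunk to the cache and
 * pass it to the client immediately, then reuse the buffer.  Memory
 * use stays at one chunk regardless of body size.  Illustrative only. */
static long tee_stream(FILE *in, FILE *cache, FILE *client)
{
    char chunk[8192];
    size_t n;
    long total = 0;

    while ((n = fread(chunk, 1, sizeof chunk, in)) > 0) {
        fwrite(chunk, 1, n, cache);   /* store it... */
        fwrite(chunk, 1, n, client);  /* ...and deliver it, now */
        total += n;
    }
    return total;
}
```

A provider built this way would cache a DVD image in 8K of working memory; whatever currently forces whole-body buffering is a property of the implementation, not of the problem.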

> The patches were posted to this dev list a long time ago, and nobody
> took any time to review them. I see no reason why anybody is going to
> review patches going in on some parallel dev branch either.

OK, I plead guilty to not reviewing them.  Did you motivate review
by accompanying them with an explanation (as above) of what
brokenness they fixed?

Nick Kew
