httpd-dev mailing list archives

From "Bill Stoddard" <>
Subject Re: mod_proxy Cache-Control: no-cache=<directive> support Apache1.3
Date Fri, 08 Mar 2002 17:48:40 GMT

> Bill Stoddard wrote:
> > mod_disk_cache does not require knowledge of content length. In principle, do you
> > think this is a problem for a proxy cache, provided we can gracefully detect and
> > handle the case where cache thresholds are being exceeded? What do squid and Apache 1.3 do?
> I have no idea what squid does. Apache v1.3 only makes a cached object
> available after it has been downloaded completely, and I think only
> objects with content-lengths. This causes the problem of nasty load
> spikes hitting a backend server when cached content expires.
> I think the following logic is a compromise:
> - In mem_cache, objects need content-lengths. Partially cached objects
> are fetchable, solving the load spike problem.

I think mem_cache should be able to cache (or begin caching) objects with unknown content
length.  Perhaps by mirroring the content to a temp file on disk and promoting it to
in-mem when the full content is received or garbage collecting it if it exceeds max cache
object size thresholds. Many content generators (I am thinking servlets and JSPs) generate
small (cacheable) responses but we may not know the length of these responses upon first
entry to CACHE_IN.

> - In disk_cache, objects do not need content-lengths, but attempts to
> cache may be abandoned once the magic threshold is reached.

> - As a result of the above possibility that downloads might be
> abandoned, partially cached objects should not be fetchable.
> Does this make sense?
> Is there a way you can see to make disk_cache support partial responses
> being fetchable?

Serving partially cached responses seems rather flaky to me. And as you alluded to,
handling the case where you are serving a partially cached response that is subsequently
abandoned is a really funky problem to solve cleanly. Need to give it some more thought.
To solve the backend load spike problem, it would be relatively straightforward to stall
threads requesting partially cached objects (with a user-definable sleep time and retry
period) to keep those threads from firing requests off to the backend servers.
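Something like the following sketch, assuming hypothetical directives along the lines of CacheStallSleep/CacheStallRetries (neither exists today): while one thread is filling the cache entry, other requesters sleep and re-check instead of each firing a duplicate request at the backend.

```c
#include <assert.h>
#include <unistd.h>

/* Hypothetical sketch only -- the directive names and entry states are
 * illustrative, not real mod_proxy configuration. One thread fills the
 * cache entry; the rest stall on it rather than hitting the backend. */

enum entry_state { FILLING, COMPLETE, ABANDONED };

static const int cache_stall_sleep_us = 100000; /* assumed: 100ms between retries */
static const int cache_stall_retries  = 50;     /* assumed: give up after ~5s */

/* Returns 1 if the entry became COMPLETE while we waited (serve from
 * cache); returns 0 if it was abandoned or we timed out, in which case
 * the caller falls through to its own backend request. */
int wait_for_cache_fill(volatile enum entry_state *state)
{
    for (int i = 0; i < cache_stall_retries; i++) {
        if (*state == COMPLETE)
            return 1;
        if (*state == ABANDONED)
            return 0;
        usleep(cache_stall_sleep_us);  /* stall instead of hitting the backend */
    }
    return 0;  /* timed out: go to the backend ourselves */
}
```

That keeps the backend seeing roughly one request per expired object instead of one per waiting client, without ever serving a partial response.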

> Regards,
> Graham
> --

