httpd-dev mailing list archives

From Graham Leggett <minf...@sharp.fm>
Subject Re: mod_proxy Cache-Control: no-cache=<directive> support Apache1.3
Date Fri, 08 Mar 2002 20:18:40 GMT
Bill Stoddard wrote:

> > - In mem_cache, objects need content-lengths. Partially cached objects
> > are fetchable, solving the load spike problem.
> 
> I think mem_cache should be able to cache (or begin caching) objects with unknown content
> length.  Perhaps by mirroring the content to a temp file on disk and promoting it to
> in-mem when the full content is received or garbage collecting it if it exceeds max cache
> object size thresholds. Many content generators (I am thinking servlets and JSPs) generate
> small (cacheable) responses but we may not know the length of these responses upon first
> entry to CACHE_IN.

Perhaps there is an alternative to this:

When mod_proxy fetches a response, and it is small enough to fit in its
internal buffer (say a configurable 64KB or whatever), and the
Content-Length is missing, mod_proxy should add a Content-Length to the
stream. (Let me check whether proxy already does this.)

As a result, responses up to a certain size will have Content-Length
headers added before they hit the cache, making them cacheable.
Responses over that size will have no Content-Length, will be sent
chunked, and will not be cached.

This way the cache can be kept simple (no caching without a
Content-Length), while small dynamic responses become cacheable
through the added Content-Length.

Thoughts?

> Serving partially cached responses seems rather flaky to me. And as you alluded to,
> handling the case where you are serving a partially cached response that is subsequently
> abandoned is a really funky problem to solve cleanly. Need to give it some more
> thought...

If a response has a Content-Length, then the only time that response
will be abandoned is if the backend server flakes out. If this happens,
the front-end response will be forced to flake out too (probably via a
closed connection).

If another request is shadowing this response, and this response flakes
out, then the original request and the shadowing request will both
flake out. I don't see this as a serious problem.

> To solve the backend load spike problem, it would be relatively straightforward to stall
> threads requesting partially cached objects (with a user definable sleep time and retry
> period) to keep those threads from firing requests off to the backend servers.

The best way, I think (from the point of view of delivering content to
the client as fast as possible), is for the shadowing threads to ship
all the cached content they can to the client for as long as cached
data is available. If a shadowing thread runs out of data to send, it
should sleep until there is more. A simple flag on the cached file
will tell whether the file is "finished" or not. Shadowing threads
simply read as much as possible from the cached file until it is
marked as complete; then the shadowing threads can signal their own
transmit as complete too.
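A sketch of that fetch/shadow handoff, using an in-memory entry and
pthreads for clarity; the structure and function names are invented
for illustration and bear no relation to the real mod_cache internals:

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

/* A cached object being filled by one "fetching" thread while any
 * number of "shadowing" threads stream it to their own clients. */
typedef struct {
    char data[65536];
    size_t len;              /* bytes cached so far */
    int complete;            /* the "finished" flag on the cached object */
    pthread_mutex_t lock;
    pthread_cond_t more;     /* signalled when len grows or complete is set */
} cache_entry;

/* Fetching thread: append newly arrived bytes, wake any shadowers. */
static void cache_append(cache_entry *e, const char *buf, size_t n)
{
    pthread_mutex_lock(&e->lock);
    memcpy(e->data + e->len, buf, n);
    e->len += n;
    pthread_cond_broadcast(&e->more);
    pthread_mutex_unlock(&e->lock);
}

/* Fetching thread: mark the object finished, wake any shadowers. */
static void cache_finish(cache_entry *e)
{
    pthread_mutex_lock(&e->lock);
    e->complete = 1;
    pthread_cond_broadcast(&e->more);
    pthread_mutex_unlock(&e->lock);
}

/* Shadowing thread: ship everything available, sleep when caught up,
 * stop once the entry is marked complete and fully sent.  Returns the
 * total number of bytes shipped. */
static size_t cache_drain(cache_entry *e, void (*send)(const char *, size_t))
{
    size_t sent = 0;
    pthread_mutex_lock(&e->lock);
    for (;;) {
        while (sent < e->len) {
            send(e->data + sent, e->len - sent);
            sent = e->len;
        }
        if (e->complete)
            break;
        pthread_cond_wait(&e->more, &e->lock);  /* sleep until more data */
    }
    pthread_mutex_unlock(&e->lock);
    return sent;
}

/* Example sink that just discards the bytes (a real one would write
 * to the client connection). */
static void discard(const char *p, size_t n) { (void)p; (void)n; }
```

The condition variable plays the role of the sleep/retry loop: a
shadower never polls the backend, it just waits for the fetching
thread to either append more data or flip the completion flag.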

Regards,
Graham
-- 
-----------------------------------------
minfrin@sharp.fm		"There's a moon
					over Bourbon Street
						tonight..."