httpd-dev mailing list archives

From Graham Leggett <>
Subject Re: More thoughts on caching in v2.0 - filtering
Date Tue, 22 Aug 2000 16:29:35 GMT

wrote:

> But... about 2 seconds after that someone will want it to
> look like this...
> >  - Put data into the cache ( and compress it at the same time )
> >  - Take data out of the cache ( and then do the following... )
> A. See if the User-Agent meets real-world criteria for accepting
> Content-encoding and/or any form of real-time compression.
> B. See if the entity requested meets local criteria for being
> delivered compressed ( Big enough to bother with, right mime type,
> yada, yada ).
> C. Either send the compressed cache goodie back 'as-is' or, if
> compression is not needed or warranted... DECOMPRESS the
> cached entity and send it back.

This is making things more complicated than they need to be. One of the
restrictions of the existing mod_proxy is that only one representation
of an object is cacheable at a time. The plan is to remove this
restriction.

If a compressed data stream comes past the caching filter, we cache the
compressed data stream. If a normal data stream comes past the filter,
we cache that as well, as a separate cache entry. If a French data
stream comes past the caching filter, we cache that as well, same story.
The cache handler in turn will use the headers on the request (like
Vary, Content-Encoding, etc etc) and content negotiation to deliver the
correct cached entity to the client. If the correct entity is not in the
cache, then the cache handler will give up and let the real handler
handle it, to be cached downstream by the cache filter.

Caching different variations of the same object at the same time will
use more RAM but will also be a lot simpler and less processor-intensive
than trying to uncompress or compress something on the fly. Simple =
fewer bugs, fewer headaches.
