httpd-dev mailing list archives

Subject Re: Compression via content negotiation
Date Thu, 03 Dec 1998 20:36:16 GMT

>  Paul Ausbeck writes...
>  I have come to the conclusion that content negotiation is not the place
>  for compression.

Considering the many issues surrounding the use of content-encoding to
deliver ANY kind of compressed file (not just gzip), in conjunction with
all the issues regarding multi-layered compression passes and the
'shared duty' issues of recording document language content as well...
I might have to give you a +1 on this.

As a side note... Consider this seemingly simple issue...

An HTML document has an explicit 'expires' time designated thus...

<meta http-equiv="expires" content="day, x mon year hh:mm:ss GMT">

Current plans for static compression using content-encoding would involve
compressing the entire HTML document... meta tags included. gzip does
one thing and it does it very well... but it does NOT leave 'parts' of the
document or bit stream 'uncompressed'. It's all or nothing.

Even if there is an 'uncompressed' version of the same URI sitting on
the server... how will the server 'know' whether the compressed and
uncompressed versions match with regard to 'expires' time? Keep in mind
that the 'meta expires' is an HTML thing and might have nothing to do
with the actual system time/date, so simply checking the time/date of
the file won't give you much to go on.
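To make the problem concrete, here's a minimal sketch (Python, purely
illustrative — the sample document and regex are mine, nothing to do with
Apache internals): the 'meta expires' value is simply not visible in the
compressed byte stream, so the server would have to inflate the entire
entity just to read one tag.

```python
import gzip
import re

html = (b'<html><head>\n'
        b'<meta http-equiv="expires" content="Thu, 03 Dec 1998 20:36:16 GMT">\n'
        b'</head><body>Hello.</body></html>\n')

compressed = gzip.compress(html)

# gzip is all or nothing: the meta tag is not left readable in the output
assert b"expires" not in compressed

# the only way to recover the tag is to inflate the whole entity first
match = re.search(rb'http-equiv="expires"\s+content="([^"]+)"',
                  gzip.decompress(compressed))
assert match is not None
```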

One answer might seem to be... well... anyone offering statically compressed
files that rely on 'meta expires' is required to keep an uncompressed
original around, and is also required to make sure that original bears
the exact same 'meta expires' tag that is buried somewhere inside the
compressed data. A lot of maintenance work and lots of room for human error.

If someone 'forgets' to make sure the 'uncompressed' version, with the
only 'meta expires' tag visible to the server or proxy-cache software,
is always up-to-date then... whoops! Who knows what the user might
get via a 'normal' IF-MODIFIED-SINCE or IF-EXPIRED query.
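As a sketch of what 'keeping them in sync' would actually cost (Python,
illustrative only — the helper name is my own invention): the server has no
way to even notice the drift without inflating the compressed twin on every
check.

```python
import gzip
import re

META_EXPIRES = re.compile(rb'http-equiv="expires"\s+content="([^"]+)"', re.I)

def meta_expires(raw_html: bytes):
    """Pull the 'meta expires' value out of (already inflated) HTML."""
    m = META_EXPIRES.search(raw_html)
    return m.group(1) if m else None

# the uncompressed twin got updated...
plain = b'<meta http-equiv="expires" content="Fri, 04 Dec 1998 00:00:00 GMT">'
# ...but someone 'forgot' to regenerate the gzip version
stale = gzip.compress(
    b'<meta http-equiv="expires" content="Thu, 03 Dec 1998 00:00:00 GMT">')

# catching the mismatch means inflating the gzip copy and comparing
assert meta_expires(plain) != meta_expires(gzip.decompress(stale))
```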

Consider proxy-caches only. So the client asks for a URI and
the server sends the 'gzip' version down to the proxy-cache. Does
it also send the 'required' 'uncompressed' file so that the proxy-cache
will be able to accurately respond to an 'IF-EXPIRED' query? I don't
think so. If a proxy ever gets an 'IF-EXPIRED' request on a completely
compressed document, does it then always have to consult up the
tree somehow to get to the 'original' document and find out whether it's
really expired or not? Brain hurt.

It would seem the only real answer to this one is to somehow allow
'gzip' to retain meta-tags ( and other things ) uncompressed at the top of the
compressed document. That's a re-write for the zlib folks, or something.
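For what it's worth, it might not even take a re-write: the gzip file
format (RFC 1952) already defines an optional comment field that travels
uncompressed in the header, and standard decoders skip right over it. A
rough sketch (Python; the 'expires=' convention here is purely my own
invention, not any proposed standard) of stashing the expires time where a
server or proxy could read it without inflating anything:

```python
import gzip
import struct
import zlib

def gzip_with_comment(data: bytes, comment: bytes) -> bytes:
    """Compress data as a gzip member whose header carries an
    uncompressed FCOMMENT field (RFC 1952, FLG bit 4)."""
    # magic, CM=deflate, FLG=FCOMMENT, MTIME=0, XFL=0, OS=unknown
    header = struct.pack("<BBBBIBB", 0x1F, 0x8B, 8, 0x10, 0, 0, 255)
    comp = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
    body = comp.compress(data) + comp.flush()
    trailer = struct.pack("<II", zlib.crc32(data) & 0xFFFFFFFF,
                          len(data) & 0xFFFFFFFF)
    return header + comment + b"\x00" + body + trailer

html = b"<html><body>Hello.</body></html>"
blob = gzip_with_comment(html, b"expires=Thu, 03 Dec 1998 20:36:16 GMT")

# the expires time sits in plain sight near the top of the file...
assert b"expires=" in blob[:64]
# ...yet any standard gzip decoder still unpacks the member normally
assert gzip.decompress(blob) == html
```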

Maybe special modules really do offer the only hope of dealing with
ALL the issues.
