httpd-dev mailing list archives

From j.@pa.dec.com (Jim Gettys)
Subject Re: Compression via content negotiation
Date Thu, 03 Dec 1998 22:19:38 GMT

> Sender: new-httpd-owner@apache.org
> From: "Roy T. Fielding" <fielding@kiwi.ics.uci.edu>
> Date: Wed, 02 Dec 1998 16:22:12 -0800
> To: new-httpd@apache.org
> Subject: Re: Compression via content negotiation
> -----
> >> Yep, and that part isn't going to change.  There must always be a
> >> distinction within our httpd namespace between URLs that are negotiation
> >> handles and those that are not.  Failure to maintain this distinction
> >> completely screws over both caching and authoring of resource
> >> representations.
> >
> >I'm not so sure that's a good idea. RIGHT NOW a site using MS's server
> >can be configured to transparently encode html files. Static, dynamic,
> >and lazy compression can all be configured. Current versions of both IE
> >and Navigator work correctly with MS's server. Compression can be
> >applied even to urls that explicitly exist, not just negotiation
> >handles.
> 
> Of course it can.  HTTP/1.1 allows compression at three different
> levels -- media type, content encoding, and transfer encoding.  The first
> two are properties of the requested resource and the third is something
> that can be added on the fly regardless of the resource.  A resource is
> a conceptual identity that is realized by the origin server (Apache)
> establishing a mapping from a name (URI) to a set of representations
> (documents), where the representation may be a file or generated on
> the fly.  If the requested URI corresponds to a file, then the resource
> that the user requested is the direct mapping to that file and nothing
> else, and thus we cannot add a content-encoding to it just because you
> might want to save a few bits.  We could add a transfer-encoding if the
> request chain is capable of handling it, but nobody implements that yet
> and it would be hell to add to Apache 1.3.x (hence, it is a 2.0 issue).

Roy, I have to take issue with this one.  It would be difficult to
implement a compression transfer coding in Apache 1.3.x in a design
where all content is pre-compressed.  But compressing on the fly is
not so painful.
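To illustrate the distinction (made-up responses for illustration, not
actual Apache output): a content coding is declared as a property of
the entity,

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip
    Content-Length: 1234

while a transfer coding is hop-by-hop and added on the fly, with
chunked as the final coding and no Content-Length:

    HTTP/1.1 200 OK
    Content-Type: text/html
    Transfer-Encoding: gzip, chunked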

But if you go back to the previous discussions and links, you'll see
that it is perfectly feasible to compress most things on the fly (each
time the document is accessed, wasteful as that is) for all but the
highest-volume web servers.  Our Russian friend implemented this, and
it was clearly a win for his bandwidth-limited web site.  Think about
it: on today's processors, the compression algorithms are more than
fast enough to saturate the network links that connect most servers to
the Internet (e.g. a T1).
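To make that concrete, here is a minimal sketch of per-request
compression with zlib (illustrative only: the document body, buffer
sizing, and compression level are my assumptions, and compress2()
emits zlib framing, so a real gzip content coding would need
deflateInit2() with a gzip wrapper instead of the one-shot call):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        /* Stands in for the file contents read for one request
           (hypothetical document body). */
        const char *doc = "<html><body>a typical static page</body></html>";
        uLong  srclen = (uLong)strlen(doc);
        /* Worst case for deflate: input plus a fraction of a
           percent of overhead; be generous. */
        uLongf dstlen = srclen + srclen / 100 + 64;
        Bytef *dst = malloc(dstlen);

        if (dst == NULL)
            return 1;

        /* Z_BEST_SPEED keeps per-request CPU cost down; the
           compression ratio on HTML is still a large win. */
        if (compress2(dst, &dstlen, (const Bytef *)doc, srclen,
                      Z_BEST_SPEED) != Z_OK) {
            free(dst);
            return 1;
        }
        printf("%lu bytes in, %lu bytes out\n",
               (unsigned long)srclen, (unsigned long)dstlen);
        free(dst);
        return 0;
    }

Compile with "cc demo.c -lz"; the point is only that the whole round
trip is a handful of library calls per request.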

For most short documents, compression on the fly takes only on the
order of a few milliseconds
(see http://rufus.w3.org/veillard/Compression/Compression.html).
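Back-of-the-envelope arithmetic supports this (assuming deflate runs
at a few MB/s on a current processor): a T1 is 1.544 Mbit/s, roughly
190 KB/s.  A 10 KB HTML page compressed at 3 MB/s costs about 3 ms of
CPU; at a typical 3:1 ratio it saves close to 7 KB, or about 35 ms of
T1 transmission time.  The CPU side wins by an order of magnitude.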

For a lot of people this is a win.

I'd take the easy way out: do the compression on the fly in 1.3.X,
and then do it in a precomputed way in 2.X.
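For the precomputed side, the existing negotiation machinery already
sketches how 2.X could do it (a sketch, not a finished design):

    # httpd.conf -- negotiate between a plain and a pre-gzipped variant
    Options +MultiViews
    AddEncoding x-gzip .gz

With foo.html and foo.html.gz side by side, a request for /foo (the
negotiation handle) with "Accept-Encoding: gzip" can select the .gz
file, while a request for /foo.html still names the plain file, which
preserves the handle-versus-file distinction Roy wants.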

And the mozilla folks have been working in this area, so I think it
is worthwhile.  They are more than a bit interested in the observed
performance gain from compression (for their naive test, they got
30%).
				- Jim


