httpd-dev mailing list archives

From "Peter J. Cranstone" <>
Subject RE: Volunteering for enhancing apache
Date Thu, 26 Apr 2001 13:34:44 GMT
>> We would still like to take it a bit further and figure out a solution
>> which is more bandwidth effective and provides faster access.

Why not add mod_gzip to Apache? Compress the output prior to encrypting, and
then use mod_ssl for security. The technique for configuring this can be found
in the mailing archives for the mod_gzip forum:

Remember that you need HTTP 1.1 compliant browsers on the client side. On the
server side, Apache 1.3.9 or 1.3.12 is the most stable, although mod_gzip has
been tested with all versions.
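A minimal httpd.conf sketch of the compress-then-encrypt setup described
above. This is an assumption-laden illustration, not a tested configuration:
module file names and certificate paths are made up and will vary by build,
though the mod_gzip and mod_ssl directives themselves are real.

```apache
# Load the modules (file names are illustrative; adjust to your build)
LoadModule gzip_module modules/mod_gzip.so
LoadModule ssl_module  modules/libssl.so

# mod_gzip: compress text responses before mod_ssl encrypts them
mod_gzip_on           Yes
mod_gzip_item_include mime  ^text/.*
mod_gzip_item_exclude mime  ^image/.*   # gifs etc. are already compressed

# mod_ssl: encrypt the (now compressed) output
<VirtualHost _default_:443>
    SSLEngine             on
    SSLCertificateFile    conf/ssl/server.crt
    SSLCertificateKeyFile conf/ssl/server.key
</VirtualHost>
```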


Peter J. Cranstone

-----Original Message-----
From: []
Sent: Wednesday, April 25, 2001 11:05 PM
Subject: Volunteering for enhancing apache

We are working on figuring out efficient ways of secure web browsing
with satellite as the transmission medium. If the browser/origin server
supports only HTTP 1.0, it results in major overhead in terms of an SSL
handshake for each TCP connection set up to fetch the HTML as
well as the embedded gifs etc.

HTTP 1.1 provides some respite, as it supports persistent connections
and pipelining. We would still like to take it a bit further and figure
out a solution which is more bandwidth effective and provides faster access.
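To make the persistent-connection point concrete, here is a rough sketch
(Python, with a made-up host and paths) of what HTTP 1.1 pipelining buys: all
five requests for a page and its gifs go out back-to-back on one TCP
connection, instead of paying a TCP (and, over SSL, handshake) setup per
request as in HTTP 1.0.

```python
# Build five pipelined HTTP/1.1 requests (one page + four gifs) that could be
# written to a single persistent TCP connection. Host and paths are made up.
def build_pipelined_requests(host, paths):
    requests = []
    for i, path in enumerate(paths):
        last = (i == len(paths) - 1)
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            # Keep the connection open until the final request.
            f"Connection: {'close' if last else 'keep-alive'}\r\n"
            "\r\n"
        )
    return "".join(requests)

paths = ["/index.html", "/a.gif", "/b.gif", "/c.gif", "/d.gif"]
payload = build_pipelined_requests("www.example.com", paths)
print(payload.count("GET "))  # five requests, but only one connection
```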

This is keeping in mind that the SSL handshake can't be spoofed...

Our idea for enhancing the HTTP protocol:

The way browsing works today is on a request/response paradigm for each
resource in a URI. For example, if an HTML page has 4 gifs, the browser sends
5 GET requests: 1 for the page and 4 for the gifs in it. I am wondering why
the web server can't go a step further: scan the page for the embedded gifs
and other resources and send them across on the same TCP connection in the
same response. This would save 4 extra GET requests and hence would reduce
network traffic, thereby providing a more bandwidth-effective mechanism for
normal web browsing.

The only shortcoming which comes to my mind as of now is the case in which
the connection breaks down and the server has to transmit all the stuff over
again... well, to overcome this demerit, I think we can add a couple of
headers to the protocol to achieve 'resume' kind of functionality.
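The "scan the page for embedded resources" step above can be sketched roughly
as follows (Python, using the standard html.parser module; the page content
is invented for illustration). A server doing this would know, before the
browser asks, which 4 extra GETs it could preempt:

```python
from html.parser import HTMLParser

# Collect the src attributes of <img> tags, i.e. the embedded resources the
# server could push down the same connection without waiting for extra GETs.
class EmbeddedResourceScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src":
                    self.resources.append(value)

page = """<html><body>
<img src="/a.gif"><img src="/b.gif">
<img src="/c.gif"><img src="/d.gif">
</body></html>"""

scanner = EmbeddedResourceScanner()
scanner.feed(page)
# 1 GET for the page; these 4 could ride along in the same response stream
print(len(scanner.resources))  # -> 4
```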

I am actually wondering whether it's too petty an idea which got overlooked
by the masses, or simply didn't spring up in anybody's mind...

Any comments................

Apart from all this, we would also like to contribute to the development of
the Apache web server to bring it as close to the best (in terms of HTTP 1.1
compliance etc.) as possible.
