httpd-dev mailing list archives

From "Peter J. Cranstone" <Cranst...@worldnet.att.net>
Subject RE: Apache 164 percent speed increase
Date Wed, 13 Oct 1999 14:50:35 GMT
Bill....

All of your testing..... <snip>

I agree. We had exactly the same problem. Real-time, on-the-fly compression
with Apache forking hundreds of requests is very painful. It's not the way to
do it.

With regard to file size: the biggest HTML file we've seen is around 750K (a
huge table from MS); most files average around 50K or maybe higher. These can
all be compressed in milliseconds. We haven't seen a 5MB file yet, but if we
did, that 5MB file would require about 23 minutes of transmission time (28.8K
modem), which would force some Apache child process to run for 23 minutes
eating up CPU cycles. Now let's compress the file on the fly. For argument's
sake, let's say it takes 10 seconds, an eternity on a web server. The file is
now on average 78% smaller, or about 1.1MB in size. Transmission time falls
to about 5 minutes (28.8K modem). In essence, the child process finishes with
a savings of roughly 18 minutes.
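The arithmetic above is easy to sanity-check. A rough sketch (assuming a
28.8 kbps modem delivers about 3.6 KB/s of payload, and ignoring protocol
overhead):

```python
# Back-of-the-envelope check of the transmission-time savings claimed above.
# Assumption: a 28.8 kbps modem moves roughly 3600 bytes/s of payload.
MODEM_BYTES_PER_SEC = 28_800 / 8  # 3600 bytes/s, protocol overhead ignored

def transmit_minutes(size_bytes: float) -> float:
    """Minutes to push size_bytes through the modem."""
    return size_bytes / MODEM_BYTES_PER_SEC / 60

original = 5 * 1024 * 1024          # the hypothetical 5 MB page
compressed = original * (1 - 0.78)  # 78% smaller, about 1.1 MB

print(f"uncompressed: {transmit_minutes(original):.0f} min")    # roughly 24
print(f"compressed:   {transmit_minutes(compressed):.0f} min")  # roughly 5
print(f"saved:        {transmit_minutes(original - compressed):.0f} min")
```

The exact figures depend on effective modem throughput, but the shape of the
argument holds: even a 10-second compression pass frees the child process
many minutes early.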

We all know a compressed file is going to be delivered faster than an
uncompressed file (vagaries of a connectionless protocol noted). What is
ABSOLUTELY KEY is how we achieve GOOD performance on the server.

If I gave you an application which bends your server in half but compresses
the data, I would expect you to tell me to move on. No, the design of the
code is critical, and that's what's taken us a year to discover. That's why
we went to great lengths to use ab; those numbers you've seen are correct.
You also know that if the server were really suffering under an incredible
load, the number of TPS would not have doubled.

Run ab on your system with content encoding and gzip compression enabled and
compare the stats with our compression server. I'm willing to bet your
results will be "rough", which is EXACTLY what we first saw. We've redesigned
the whole approach, and the stats now verify the correct way to go.
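For anyone wanting to reproduce that comparison, a minimal ab run might look
like the following (hostname, request counts, and test page are placeholders;
the second run simply adds an Accept-Encoding request header so a
compression-enabled server will return gzip-encoded responses):

```shell
# Baseline: plain, uncompressed responses
ab -n 1000 -c 10 http://localhost/test.html

# Same URL, but advertise gzip support so the server compresses on the fly
ab -n 1000 -c 10 -H "Accept-Encoding: gzip" http://localhost/test.html
```

Comparing the "Requests per second" and "Transfer rate" lines between the two
runs is the quickest way to see whether compression is helping or hurting the
server.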

Regards,

Peter J. Cranstone
Cranstone@RemoteCommunications.com
http://www.remotecommunications.com

-----Original Message-----
From: new-httpd-owner@apache.org [mailto:new-httpd-owner@apache.org]On
Behalf Of Bill Jones
Sent: Wednesday, October 13, 1999 5:15 AM
To: new-httpd@apache.org
Subject: Re: Apache 164 percent speed increase

(Followed this thread for awhile...)

> 'You can come quietly... or you can make it hard on yourself...
> which do you choose?'


I have to choose the hard way  :)

All of my testing - albeit a year ago - says real time compression
'slows the server down' - considerably.  And Yes, I actually can get
my solution to work with MacOS and Unix (not just on Windows); scary huh?

Until clients get 'universally' faster - not just those special cases
where some people have to have 600MHz with cable modems at home -
I will always decide against using 'real-time' compression in my
business environment. (I haven't seen any benefit with file sizes
larger than 5MB, and compressing a bunch of smaller files on the
fly really does not work out too well - not on an Ultra 1 anyway.
Maybe when I get my E10k we can look at it again -
w/4 400MHz UltraSparcs, who couldn't do it in real time? :)

Bottom-line:
You 'might' be able to compress data at the httpd request, but the
client decompressing it is gonna hate you - especially those that
are already overburdened with more important things -
like playing 'Tribes'...

my $bits = 2; #worth...
-Sneex-  :]
(2 bits is 25 cents... Inflation - who knew?)

