httpd-dev mailing list archives

Subject Re: Apache 164 percent speed increase
Date Wed, 13 Oct 1999 11:55:57 GMT
In a message dated 99-10-13 07:15:57 EDT, you write:

>  All of my testing - albeit a year ago - says real time compression
>  'slows the server down' - considerably.  And Yes, I actually can get
>  my solution to work with MacOS and Unix (not just on Windows); scary huh?

Not scary at all. We work with EVERYTHING as well... all versions
of UNIX, all HTTP-compliant servers... and ALL BROWSERS
on ALL workstation platforms that support Content-Encoding.
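
For the record, the mechanics are simple: the client advertises what it
can decode via Accept-Encoding, and the server compresses only when the
client asked for it. A minimal sketch in Python ( purely illustrative;
the header parsing here ignores q-values ):

```python
# Sketch of Content-Encoding negotiation: compress only when the
# client's Accept-Encoding header says it can handle gzip.
import gzip

def choose_encoding(accept_encoding_header):
    """Pick gzip only if the client advertised support for it."""
    accepted = [tok.strip().split(";")[0]
                for tok in accept_encoding_header.split(",")]
    return "gzip" if "gzip" in accepted else "identity"

def encode_body(body, encoding):
    """Apply the negotiated encoding to the response body."""
    if encoding == "gzip":
        return gzip.compress(body)
    return body

request_header = "gzip, deflate"        # what the browser sent
encoding = choose_encoding(request_header)
body = b"<html>" + b"x" * 1000 + b"</html>"
wire = encode_body(body, encoding)      # bytes actually sent on the wire
print(encoding, len(body), len(wire))
```

A real server would also honor q-values and emit `Vary: Accept-Encoding`;
this only shows the basic handshake.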

What was your test environment and what software were you using?
It's pretty hard to make any judgements at all about your 'test results'
without even the slightest idea what you were doing.
>  Until clients get 'universally' faster - not just those special cases
>  where some people have to have 600MHz with cable modems at home -

Do you not realize that the physical design and implementation
of 'cable modems' simply validates the need for compressing
content? Have you ever been to an apartment building where
there are 100 people sharing a cable and they all ask for CNN's
90,000 byte home page at the same time? The available bandwidth
gets diced up every which way from Sunday and you would swear
you are back on a 28.8k modem in terms of actual 'response time'.

Cable modems ( and/or DSL ) are just band-aids. The wound remains.

Too many bytes of totally redundant and unnecessary content
eating up too much bandwidth.
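
To put rough numbers on the apartment-building scenario ( the
90,000-byte page and 100 subscribers are from above; the 10 Mbit/s
shared segment speed and the 80 percent reduction are assumed figures
for illustration ):

```python
# Back-of-the-envelope: 100 subscribers sharing one cable segment,
# all fetching a 90,000-byte page at once.
subscribers = 100
page_bytes = 90_000
segment_bits_per_sec = 10_000_000  # shared segment capacity (assumed)

def seconds_to_serve(bytes_per_user, users, link_bps):
    """Total time to push everyone's copy through the shared link."""
    total_bits = bytes_per_user * 8 * users
    return total_bits / link_bps

uncompressed = seconds_to_serve(page_bytes, subscribers, segment_bits_per_sec)
# Assume an 80 percent reduction, i.e. one fifth of the bytes:
compressed = seconds_to_serve(page_bytes // 5, subscribers, segment_bits_per_sec)
print(f"uncompressed: {uncompressed:.2f}s  compressed: {compressed:.2f}s")
```

Same wire, one fifth of the bytes, one fifth of the wait.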

>  I will always decide against using 'real-time' compression in my
>  business environment. 

Key word is 'your' environment. Of course you will always 
'do what you want' in 'your' environment. The key is to have
options and make intelligent choices based on things that
DO, in fact, work.... whether you choose to use them or not.

>  I haven't seen any benefit with file sizes
>  larger than 5MB and compressing a bunch of smaller files on the
>  fly really does not work out too well - not on an Ultra 1 anyways -
>  maybe when I get my E10k then we can look at it again then
>  (w/4 400MHz UltraSparcs who couldn't do it in real time? :)

Again... some pretty fuzzy logic. We have compressed 5MB+
files ( we are obviously not talking about HTML at this point )
as much as 80 percent. I myself certainly see the benefit of
sending/receiving 1 megabyte versus 5. I guess I don't 
understand why you don't.

With regards to 'small files' and 'does not work out too well'
I don't think whatever software you were using was worth
much. Our testing shows an OBVIOUS cumulative benefit
when files are numerous and small ( Like they are on the Internet ).
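
A quick sketch of that cumulative effect, using zlib on an invented mix
of small repetitive HTML files ( the file contents are made up for
illustration; real-world ratios will vary ):

```python
# Cumulative saving across many small files: each file is tiny,
# but the total saving across the whole site adds up.
import zlib

# Simulate a site of 200 small, text-like pages.
files = [("page%d.html" % i,
          b"<html><body>" + b"The quick brown fox. " * 40 + b"</body></html>")
         for i in range(200)]

raw = sum(len(data) for _, data in files)
packed = sum(len(zlib.compress(data)) for _, data in files)
print(f"{raw} raw bytes -> {packed} compressed "
      f"({100 * (1 - packed / raw):.0f}% saved)")
```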

>  Bottom-line:
>  You 'might' be able to compress data at the httpd request, but the
>  client decompressing it is gonna hate you - especially those that
>  are already over burdened with more important things -
>  like playing 'Tribes'...

Huh? We have tested the 'clients' ability to 'handle' the 
inbound compressed data and we see no problems 
whatsoever in this area. It works the same for the client
as it does for the Server... decompressing a small amount
of data and finishing the transaction as quickly as possible
versus spinning the TCP/IP sub-layer for more than twice
the time to receive the uncompressed data has the same
effect on the client that it does on the server... It allows
the client to get 'back to business' that much FASTER and
actually INCREASES workstation efficiency.
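
The trade the client is making can be sketched directly: decompressing
the payload costs far less time than receiving the extra bytes would
have cost on the wire ( the payload below is invented; only the 28.8k
modem speed comes from above ):

```python
# Compare client-side decompression time against the transfer time
# the compression saved on a 28.8 kbit/s modem link.
import time
import zlib

body = b"Cable modems are band-aids. " * 4000   # ~112 KB of text
packed = zlib.compress(body)

t0 = time.perf_counter()
restored = zlib.decompress(packed)               # the client's extra work
decompress_secs = time.perf_counter() - t0
assert restored == body

saved_bytes = len(body) - len(packed)
transfer_secs_saved = saved_bytes * 8 / 28_800   # modem time NOT spent

print(f"decompress cost: {decompress_secs * 1000:.2f} ms, "
      f"transfer time saved: {transfer_secs_saved:.1f} s")
```

On a slow link the CPU spends milliseconds to save the wire seconds,
which is the whole point.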

It all comes down to 'the code'. An improperly written
compression algorithm is as dangerous to resource
availability as an improperly written communications
sub-layer. Both have the 'potential' to bring a machine
to its knees. However... If BOTH are done properly
you will see nothing but an INCREASE in overall throughput
and performance.

Kevin Kiley
CTO, Remote Communications, Inc.
RCTPD's real-time online document compression server.
