httpd-dev mailing list archives

From "Peter J. Cranstone" <Cranst...@worldnet.att.net>
Subject RE: Apache 164 percent speed increase
Date Wed, 13 Oct 1999 04:39:14 GMT
>> Whatever.  There has already been far too much attention given to your...
>>odd views about how compression should be done.  Hmm, I notice that your
>>solution still only works on IE running on Windows and that your demo of
>>how well your product works consists of comparing the size of a gzipped
>>file to a non-gzipped file.  Sorry, compression has actually been around
>>for a while and people have been gzipping files since the week before
>>last.

You might want to review your facts. Gzip content encoding is supported in
MSIE 4.x and higher, along with Netscape 4.51 and higher. Compression has
been around for years; funny how useful it is, and yet Apache has had no
native real-time support for it, until now that is. Our solution operates
with these browsers without the need for a client piece; older browsers
obviously require a client-side app. As for gzip itself: it is a useful
compression algorithm but sorely outdated for compressing HTML and XML
documents, being slow and poorly suited to that dataset. We have already
designed algorithms which outperform gzip's compression levels by an
additional 8-10%.

As for your "whatever" comment: with compression turned on, the server is
faster, bandwidth is saved, the consumer's experience is improved, and the
web server's transmission cycles are reduced. Please explain why this is
not a desirable feature for any web server.

As for our views on compression: I respect your opinion, and in the
meantime I suggest that you post your own views on how you would do
compression (if you are able to do so), or better still, tell us why it is
an unimportant feature for Apache.

Last time I checked, one of the biggest problems on the web is speed, and
since for the foreseeable future at least 98% of the web will connect at
28K or slower, I see the need for more efficient software. Unless Apache
moves toward efficient delivery of documents, other servers which address
these problems will surely make significant inroads into your market share.
XML is significantly more bloated than HTML; Apache's native performance
will suffer greatly without significantly reducing the size of these
documents.

Personally, I couldn't care less what you think of our compression
technology or our approach. I have followed this forum for a year now and
have watched many, many interesting exchanges. The compression technology
works, and there is a clear and significant benefit to using it. People
with poor, low-bandwidth connections, and those who pay for their
bandwidth, are always interested in ways to save money.

Bottom line, Marc: a compressed Apache server will always outperform a
non-compressed Apache server by a considerable margin. Guess you're stuck
in the slow lane.

Cheers.


Peter J. Cranstone
Cranstone@RemoteCommunications.com
http://www.remotecommunications.com

-----Original Message-----
From: new-httpd-owner@apache.org [mailto:new-httpd-owner@apache.org] On
Behalf Of Marc Slemko
Sent: Tuesday, October 12, 1999 10:01 PM
To: new-httpd@apache.org
Subject: Re: Apache 164 percent speed increase

On Tue, 12 Oct 1999 TOKILEY@aol.com wrote:

>
> In the course of testing our new online real-time document
> compression server at http://www.rctp.com we have found a
> number of problems with regards to ApacheBench.
>
> RCI's online compression servers work with ANY version of
> ANY HTTP compliant server but when used in conjunction with
> Apache there is a 164 ( ONE HUNDRED AND SIXTY FOUR ) percent
> performance increase for Apache.

Whatever.  There has already been far too much attention given to your...
odd views about how compression should be done.  Hmm, I notice that your
solution still only works on IE running on Windows and that your demo of
how well your product works consists of comparing the size of a gzipped
file to a non-gzipped file.  Sorry, compression has actually been around
for a while and people have been gzipping files since the week before
last.

> * ApacheBench problem 1
>
> Line 778 of ApacheBench in Apache_1.3.9\src\support\ab.c
> is incorrect. If you add an 'Accept-Encoding: gzip, compress'
> header option via the command line it gets added to the BODY
> of the document and not the HEADER where it belongs.
>
> Relevant 'clip' from AB.C...
>
> /* setup request */
> if (!posting) {
>     sprintf(request, "GET %s HTTP/1.0\r\n"
>             "User-Agent: ApacheBench/%s\r\n"
>             "%s" "%s" "%s"
>             "Host: %s\r\n"
>             "Accept: */*\r\n"
>             "\r\n" "%s",       <---- Line 778 of ab.c is incorrect
>             path,
>             VERSION,
>             keepalive ? "Connection: Keep-Alive\r\n" : "",
>             cookie, auth, hostname, hdrs);
> }
>
> There is no need for a CVS 'diff' or a PATCH on this.
> It's just a simple mistake and needs a quick re-type
> on the part of someone who knows where the 'real' master
> source module is.

This is already fixed in the current CVS tree.

[...]

> While ApacheBench is adequate for most purposes and is
> a good 'standard benchmark'... real-time online document
> compression is here NOW and it is here to stay so ApacheBench
> really needs to start including some decompression code to come
> up with some 'truly' accurate stats.

Erm... I don't think so thanks.

> There needs to be a new 'result' field called
> 'Virtual Transfer Rate' which shows how many 'real' bytes
> were transferred after decompression.
>
> The 'Virtual Transfer Rate' field would show you how
> compressing mime types results in a 'virtual' kb/s rate that
> is much, much HIGHER than the 'actual' transfer rate which is
> the only thing currently reported by ApacheBench.
>
> When compressed text/xxxx is being transmitted the actual
> transfer time means very little... what is really important
> is how many 'uncompressed' bytes were received.
> THAT is the 'real' transfer rate from the user's perspective
> and will become the new relevant 'benchmark' figure
> in the very near future.

I don't think so.  If you can do x amount of traffic, then that is x
amount of traffic period.  If you compress it, you may need less traffic
to send the same thing but that isn't something that ab should care
about.

> * ApacheBench and PROXY servers.
>
> The only thing stopping ApacheBench from being used to test
> ANY Proxy Server as well as a Web Server is just a simple
> glitch in the 'parse_url()' routine that refuses to remove
> the forward slash from the command line URL if that URL is a
> fully qualified name such as 'http://www.somewhere.com/some.document'
>
> A simple fix to parse_url() that recognizes a 'proxy' request
> and removes the leading slash allows ApacheBench to be used
> to test ANY proxy server.

You would need to be a bit more explicit about what you are talking
about.  Posting diffs always helps.  How does fixing a "simple glitch
in parse_url" let you specify what proxy to use and what origin server
to talk to?

