httpd-dev mailing list archives

From: Marc Slemko <ma...@znep.com>
Subject: Re: Apache 164 percent speed increase
Date: Wed, 13 Oct 1999 04:01:22 GMT
On Tue, 12 Oct 1999 TOKILEY@aol.com wrote:

> 
> In the course of testing our new online real-time document
> compression server at http://www.rctp.com, we have found a
> number of problems with regard to ApacheBench.
> 
> RCI's online compression servers work with ANY version of
> ANY HTTP-compliant server, but when used in conjunction with
> Apache there is a 164 (ONE HUNDRED AND SIXTY-FOUR) percent
> performance increase for Apache.

Whatever.  There has already been far too much attention given to your...
odd views about how compression should be done.  Hmm, I notice that your
solution still only works on IE running on Windows and that your demo of
how well your product works consists of comparing the size of a gzipped
file to a non-gzipped file.  Sorry, compression has actually been around
for a while and people have been gzipping files since the week before
last.

> * ApacheBench problem 1
> 
> Line 778 of ApacheBench in Apache_1.3.9\src\support\ab.c
> is incorrect. If you add an 'Accept-Encoding: gzip, compress'
> header option via the command line, it gets added to the BODY
> of the request and not the HEADERS where it belongs.
> 
> Relevant 'clip' from AB.C...
> 
> /* setup request */
> if (!posting) {
>     sprintf(request, "GET %s HTTP/1.0\r\n"
>             "User-Agent: ApacheBench/%s\r\n"
>             "%s" "%s" "%s"
>             "Host: %s\r\n"
>             "Accept: */*\r\n"
>             "\r\n" "%s",   <---- Line 778 of ab.c is incorrect: the last
>                                  "%s" (hdrs) comes after the blank line
>                                  that ends the header block
>             path,
>             VERSION,
>             keepalive ? "Connection: Keep-Alive\r\n" : "",
>             cookie, auth, hostname, hdrs);
> }
> 
> There is no need for a CVS 'diff' or a PATCH on this.
> It's just a simple mistake and needs a quick re-type
> on the part of someone who knows where the 'real' master
> source module is.

This is already fixed in the current CVS tree.
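
For the record, the fix amounts to moving the extra headers ahead of the
blank line that terminates the request headers.  A rough sketch of the
corrected ordering (not a verbatim copy of what is in CVS) would be:

    /* sketch of the corrected ordering: the user-supplied headers (hdrs)
       now come before the "\r\n" that ends the header block, so they are
       sent as request headers rather than as the body */
    if (!posting) {
        sprintf(request, "GET %s HTTP/1.0\r\n"
                "User-Agent: ApacheBench/%s\r\n"
                "%s" "%s" "%s"
                "Host: %s\r\n"
                "Accept: */*\r\n"
                "%s" "\r\n",
                path,
                VERSION,
                keepalive ? "Connection: Keep-Alive\r\n" : "",
                cookie, auth, hostname, hdrs);
    }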

[...]

> While ApacheBench is adequate for most purposes and is
> a good 'standard benchmark'... real-time online document
> compression is here NOW and it is here to stay, so ApacheBench
> really needs to start including some decompression code to come
> up with some 'truly' accurate stats.

Erm... I don't think so, thanks.

> There needs to be a new 'result' field called
> 'Virtual Transfer Rate', which shows how many 'real' bytes
> were transferred after decompression.
> 
> The 'Virtual Transfer Rate' field would show you how
> compressing MIME types results in a 'virtual' kb/s rate that
> is much, much HIGHER than the 'actual' transfer rate, which is
> the only thing currently reported by ApacheBench.
> 
> When compressed text/xxxx is being transmitted, the actual
> transfer time means very little... what is really important
> is how many 'uncompressed' bytes were received.
> THAT is the 'real' transfer rate from the user's perspective
> and will become the new relevant 'benchmark' figure
> in the very near future.

I don't think so.  If you can do x amount of traffic, then that is x
amount of traffic, period.  If you compress it, you may need less traffic
to send the same thing, but that isn't something that ab should care
about.
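
To spell out what the proposed field would actually be: both numbers
divide a byte count by the same elapsed time, they just disagree about
which byte count matters.  A toy illustration, with made-up byte counts
and nothing to do with ab's internals:

    #include <stdio.h>

    int main(void)
    {
        double wire_bytes   = 25000.0;   /* bytes received on the wire (assumed) */
        double decomp_bytes = 100000.0;  /* bytes after decompression (assumed)  */
        double elapsed_secs = 2.0;       /* total test time (assumed)            */

        /* what ab reports today: on-the-wire transfer rate */
        printf("Transfer rate:           %.2f Kbytes/sec\n",
               wire_bytes / 1024.0 / elapsed_secs);

        /* the proposed 'Virtual Transfer Rate': same time, bigger byte count */
        printf("'Virtual' transfer rate: %.2f Kbytes/sec\n",
               decomp_bytes / 1024.0 / elapsed_secs);
        return 0;
    }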

> * ApacheBench and PROXY servers.
> 
> The only thing stopping ApacheBench from being used to test
> ANY Proxy Server as well as a Web Server is just a simple
> glitch in the 'parse_url()' routine that refuses to remove
> the forward slash from the command-line URL if that URL is a
> fully qualified name such as 'http://www.somewhere.com/some.document'.
> 
> A simple fix to parse_url() that recognizes a 'proxy' request
> and removes the leading slash allows ApacheBench to be used
> to test ANY proxy server.

You would need to be a bit more explicit about what you are talking 
about.  Posting diffs always helps.  How does fixing a "simple glitch
in parse_url" let you specify what proxy to use and what origin server
to talk to?
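
To be concrete about why a one-line change to parse_url() can't be the
whole story: a request sent through a proxy carries the full absolute URI
in the request line, and the TCP connection goes to the proxy's host and
port rather than to the origin server, so ab would also need some way of
being told where the proxy lives.  A sketch of the difference, assuming a
hypothetical use_proxy flag and proxyhost/proxyport settings that ab does
not currently have:

    if (use_proxy) {
        /* proxy-style request: absolute URI in the request line, and the
           socket is opened to proxyhost:proxyport, not hostname:port */
        sprintf(request, "GET http://%s%s HTTP/1.0\r\n", hostname, path);
    } else {
        /* normal origin-server request: path only */
        sprintf(request, "GET %s HTTP/1.0\r\n", path);
    }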

