httpd-dev mailing list archives

Subject: Re: Apache 164 percent speed increase
Date: Wed, 13 Oct 1999 00:54:09 GMT

Hi Marc.
Kevin Kiley here.
Thanks for the quick response.

You've been awfully quiet lately.
Glad to know you are still in there pitching.

In a message dated 99-10-13 00:02:46 EDT, you write:

>  > RCI's online compression servers work with ANY version of
>  > ANY HTTP compliant server but when used in conjunction with
>  > Apache there is a 164 ( ONE HUNDRED AND SIXTY FOUR ) percent
>  > performance increase for Apache.
>  Whatever.  There has already been far too much attention given to your...
>  odd views about how compression should be done.  

Show me one other product in the world that compresses Internet
documents 'on the fly'... GZIP or anything else... based on actual sensing
of the user agent's capabilities, and that conforms to the RFC standards.
You won't find one. Everyone talks about doing it but talk is cheap.
It is here now... we have already done it. We welcome news about
ANY other server product that can do the same so we can test against it.
Competition is good. It's lonely being the 'only one in the world'.

>  Hmm, I notice that your solution still only works on IE running on Windows 

Sorry, wrong. It works on ALL platforms, ALL browsers... even LYNX.
The benchmarking is for a true real-time online compression server.
It runs on ANY Unix, including Solaris on a Sun SPARCstation. It works
with ANY HTTP compliant server and any browser that supports
Content-encoding.
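For the record, the sensing described above is just the standard HTTP
negotiation: the browser advertises what it can decode, and the server
labels what it actually sent back (hostname and sizes here are invented
for illustration):

```
GET /index.html HTTP/1.0
Host: demo.example.com
Accept-Encoding: gzip, compress

HTTP/1.0 200 OK
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 4096
```

A client that did not send Accept-Encoding simply gets the identity
(uncompressed) entity back.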

> ...and that your demo of how well your product works consists 
> of comparing the size of a gzipped file to a non-gzipped file.

I'm sorry if you don't understand the table headings on the performance
benchmarks or what the result items actually mean. It's your own unaltered
'ApacheBench' output. If you are confused about what the table is really
saying, please let me know exactly which test run you don't understand.
The highlighted test runs show that the same transactions were completed
in less than half the time they would 'normally' take. What don't you get?

>  > * ApacheBench problem 1
>  > 
>  > Line 778 of ApacheBench in Apache_1.3.9\src\support\ab.c
>  > is incorrect. If you add an 'Accept-Encoding: gzip, compress'
>  > header option via the command line it gets added to the BODY
>  > of the document and not the HEADER where it belongs.
>  > 
>  > Relevant 'clip' from AB.C...
>  > 
>  > /* setup request */
>  > if (!posting) {
>  > sprintf(request, "GET %s HTTP/1.0\r\n"
>  > "User-Agent: ApacheBench/%s\r\n"
>  > "%s" "%s" "%s"
>  > "Host: %s\r\n"
>  > "Accept: */*\r\n"
>  > "\r\n" "%s",       <---- Line 778 of AB.C is incorrect
>  > path,
>  > keepalive ? "Connection: Keep-Alive\r\n" : "",
>  > cookie, auth, hostname, hdrs);
>  > }
>  > 
>  > There is no need for a CVS 'diff' or a PATCH on this.
>  > It's just a simple mistake and needs a quick re-type
>  > on the part of someone who knows where the 'real' master
>  > source module is.
>  This is already fixed in the current CVS tree.

Fantastic! Thank you. 
By 'current CVS tree' do you mean APR?
I do NOT see the fix in the 1.3.x tree.
>  > While ApacheBench is adequate for most purposes and is
>  > a good 'standard benchmark'... real-time online document
>  > compression is here NOW and it is here to stay so ApacheBench
>  > really needs to start including some decompression code to come
>  > up with some 'truly' accurate stats.
>  Erm... I don't think so thanks.

If you have gone to the trouble of allowing the 'Accept-encoding: gzip,compress'
header to be part of the outbound testing, then doesn't it make sense to
go the 'full monty' and actually DO THE DEED? It doesn't make much
sense to run a test to see if a server is 'accepting' the field unless
you also verify that a 'Content-encoding:' header actually came back and
that the data is OK.

>  > There needs to be a new 'result' field called
>  > 'Virtual Transfer Rate' which shows how many 'real' bytes
>  > were transferred after decompression.
>  > 
>  > The 'Virtual Transfer Rate' field would show you how
>  > compressing mime types results in a 'virtual' kb/s rate that
>  > is much, much HIGHER than the 'actual' transfer rate which is
>  > the only thing currently reported by ApacheBench.
>  > 
>  > When compressed text/xxxx is being transmitted the actual
>  > transfer time means very little... what is really important
>  > is how many 'uncompressed' bytes were received.
>  > THAT is the 'real' transfer rate from the user's perspective
>  > and will become the new relevant 'benchmark' figure
>  > in the very near future.
>  I don't think so.  If you can do x amount of traffic, then that is x
>  amount of traffic period.  If you compress it, you may need less traffic
>  to send the same thing but that isn't something that ab should care
>  about.

I disagree totally. It is useful to know both the REAL and the VIRTUAL
byte counts at all times so comparisons can be made against
servers that DO compress data. When Apache finally gets around to
really doing Content-encoding: and you are wondering how it
fares against other products that deliver compressed data, I think
you will change your tune about this. It's a measure of how well
the server compresses data as WELL as how fast it can spit out bytes.

>  > * ApacheBench and PROXY servers.
>  > 
>  > The only thing stopping ApacheBench from being used to test
>  > ANY Proxy Server as well as a Web Server is just a simple
>  > glitch in the 'parse_url()' routine that refuses to remove
>  > the forward slash from the command line URL if that URL is a
>  > fully qualified name such as ''
>  > 
>  > A simple fix to parse_url() that recognizes a 'proxy' request
>  > and removes the leading slash allows ApacheBench to be used
>  > to test ANY proxy server.
>  You would need to be a bit more explicit about what you are talking 
>  about.  Posting diffs always helps.  How does fixing a "simple glitch
>  in parse_url" let you specify what proxy to use and what origin server
>  to talk to?

I thought it was pretty clear...
Just look at parse_url() in ab.c. It's a very short routine.

It is too dependent on the leading 'slash' being there
and doesn't know when to toss it away.

Currently if you want to test against SQUID running on port 3128 
and you give ApacheBench this command line...

ab -n 1000

Which means you want SQUID at to 
give you

ApacheBench sends '/' to the SQUID
server with the slash on the front, which results in an ERROR.

ApacheBench needs to 'see' the /http:// prefix and realize that
this request can only be targeted for a proxy server and remove
the leading slash.

Kevin Kiley
CTO, Remote
and/or <- RCTPDS Online document compression server home page
