httpd-dev mailing list archives

From Brian Behlendorf <br...@organic.com>
Subject rebuttal to spyglass server efficiency claims
Date Sat, 02 Mar 1996 05:11:26 GMT
This is from snews://secnews.netscape.com/netscape.devs-announce:

Don Hackler wrote:
> 
> Brian Vowell wrote:
> >
> > My original question that I started this thread with still hasn't been
> > answered-- when are we going to see some benchmarks?
> >
> > Has anyone tried the Spyglass server?  Any remarks?
> 
> Hello Brian...
> 
> It took a while but I was able to get an official answer to your
> question about the Spyglass server:
> 
> This was "textified" from the HTML version posted internally at Netscape.
> 
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> 
> Rebuttal to performance claims by Spyglass
> 
>                                  Michael Blakeley
>                              Netscape Communications
>                            Last Modified 12 February 1996
> 
> Summary
> 
> A recent publication by Spyglass claimed performance 3-10 times that of other
> Web servers. Netscape has examined their claims, and found that:
> 
>   Spyglass modified the WebStone benchmark to support HTTP/1.1 Keep-Alive
> transactions, which improve the efficiency of Web servers by retrieving
> multiple documents with a single connection.
>   The source code for these changes was not immediately made public. After
> we requested the changed WebStone source code, Spyglass released it.
>   Spyglass's implementation of Keep-Alives in WebStone does not reflect
> real-world performance of Keep-Alives. In the real world, Keep-Alive
> transactions will recycle a connection 0-10 times. In the Spyglass benchmark,
> we estimate that each connection was recycled 500-16,000 times.
>   Spyglass tested only those competitive servers that do not support Keep-Alives.
>   Spyglass has not disclosed any performance tuning of the servers in the test.
> Performance tuning alone could account for the differences observed.
>   Spyglass did not report the client error rates generated during their tests.
> 
> Our conclusion is that the Spyglass numbers are probably the result of a
> self-serving benchmark methodology.
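> [Editorial sketch: a rough illustration of why Keep-Alive matters. The page
> composition (one HTML page plus nine inline images) and the reuse limit of 10
> are assumed figures for illustration, not numbers from the Spyglass report.]

```python
# Sketch: TCP connection counts with and without Keep-Alive, under
# assumed figures (10 documents per page, up to 10 requests reused).

def connections_needed(num_documents, max_requests_per_connection):
    """Connections needed to fetch num_documents, reusing each
    connection for at most max_requests_per_connection requests."""
    # Ceiling division: open a new connection once the current one
    # has served its quota of requests.
    return -(-num_documents // max_requests_per_connection)

docs = 10  # one HTML page plus nine inline images (assumed)

without_keepalive = connections_needed(docs, 1)   # HTTP/1.0: one request per connection
with_keepalive = connections_needed(docs, 10)     # Keep-Alive: up to 10 requests reused

print(without_keepalive, with_keepalive)  # 10 connections vs. 1
```

> This is the efficiency gain the summary refers to: the per-connection
> setup and teardown cost is paid once instead of once per document.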
> 
> The original Spyglass document is available at
> http://www.spyglass.com/products/server_download.html
> and
> http://www.spyglass.com/products/server_results.html
> 
> Details
> 
> In summary, Spyglass claims superior average response time and throughput
> (both connections per second and Mbit/sec) versus NCSA, CERN, Apache, and
> "Brand X 1.1" (probably an allusion to Netscape 1.1).
> 
> Spyglass tested each Web server software platform on a Sparcstation 20
> model 71 with 64MB of RAM and Sun's quad-Ethernet Sbus interface.
> 
> Three 50MHz MicroSparc-II Sun workstations (Classics and LXs) were attached,
> each on a private Ethernet, as shown below. The clients and the server ran
> Solaris 2.5. The machines did not run DNS, NIS, or NFS.
> 
> [ image missing... ]
> 
> In their report, Spyglass writes that this configuration eliminates collisions.
> This isn't quite true - collisions still occur whenever both workstations on a
> segment attempt to simultaneously transmit a packet. The maximum realistic
> throughput on each segment is 6-8 Mbit/sec, for a total of 24-32 Mbit/sec
> bandwidth.
> 
> Spyglass measured the performance of the WWW server software at simulated
> user loads of 30, 60, 90, 120, 150, 180, and 210 "users" (presumably, they
> mean WebStone processes). Each test ran for 30 minutes, and three iterations
> were performed.
> 
> So far, the picture is clear. But Spyglass omitted much of the information
> that should be detailed in any report of WebStone results.
> 
>   Which workload did they use?
>   How were the servers tuned?
>   How was the operating system tuned?
>   Where can we examine their raw data?
> 
> Without answers to these questions, it is impossible to replicate their tests.
> Spyglass also did not release error rates for the servers they tested.
> WebStone records errors whenever a "Connection Refused" dialog would appear
> to a Web browser. This poses an interesting question for server evaluations:
> would you rather have a very fast server, and some percentage of users
> receiving "Connection Refused" messages, or a slower server that accepts all
> connections? Since Spyglass did not provide error rates, we don't know if
> their server generated errors, or not.
> 
> Spyglass used version 1.1 of the WebStone benchmark (available at
> <http://www.sgi.com/Products/WebFORCE/WebStone/>). However, Spyglass
> modified the source code to:
> 
>   fix bugs
>   add support for Keep-Alive transactions, as specified by the draft
> HTTP 1.1 standard.
> 
> Spyglass supports these Keep-Alive transactions. The other servers in their
> test did not. This doesn't invalidate their test, but it does shed light on
> their motivations. How did Spyglass implement Keep-Alives?  Spyglass did not
> at first provide source code for their modifications. After inquiries, the
> source code was eventually released.
> 
> Upon examining the source code, we found that Spyglass's implementation
> of Keep-Alives maintained the same connection as long as the test lasted.
> So, literally thousands of HTTP transactions occurred over each TCP connection.
> This single element really invalidates Spyglass's entire methodology.
> 
> Since WebStone is capable of loading a server with several hundred
> HTTP 1.0 operations per second, the normal bottleneck for an HTTP/1.0
> server is its ability to quickly create and dispose of HTTP connections.
> When connections are re-used (Keep-Alive transactions), this bottleneck goes
> away, and hundreds of "virtual" connections per second potentially travel
> through each real TCP connection.
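> [Editorial sketch: one way a per-connection count in the 500-16,000 range
> could arise when each client holds a single connection for the whole run.
> The throughput and client-count figures below are assumptions chosen to
> bracket that range, not Spyglass's measured data.]

```python
# Back-of-the-envelope estimate of transactions per connection when each
# WebStone client keeps one connection open for the entire 30-minute test.
# Throughput figures are assumptions for illustration, not measured data.

TEST_SECONDS = 30 * 60  # each Spyglass test ran for 30 minutes

def transactions_per_connection(total_ops_per_sec, num_clients):
    """With one never-closed connection per client, every transaction a
    client performs travels over that same TCP connection."""
    return total_ops_per_sec * TEST_SECONDS / num_clients

# Assumed scenarios bracketing the 500-16,000 range cited above:
low = transactions_per_connection(total_ops_per_sec=60, num_clients=210)
high = transactions_per_connection(total_ops_per_sec=270, num_clients=30)

print(round(low), round(high))  # roughly 514 and 16,200
```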
> 
> But that's not the way Keep-Alives work in the real world. In the real
> world, a Keep-Alive connection will most likely last for 1-10 transactions,
> not 500-16,000.
> 
> When modifying an existing benchmark, it's important to think about the
> design of the benchmark, and its suitability to the task at hand. WebStone
> was designed for HTTP/1.0 transactions, without any Keep-Alives. Simply
> adding Keep-Alives is not enough - we must analyze how many times each
> connection should be re-used, if real-world performance is to be approximated.
> A realistic WebStone implementation of Keep-Alives would re-use each connection
> some small number of times - perhaps a weighted distribution from 1-10
> times would be satisfactory.
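> [Editorial sketch: a minimal version of the weighted reuse policy described
> above. The distribution weights, skewed toward short reuse, are assumed for
> illustration; they are not part of any proposed WebStone change.]

```python
# Sketch: recycle each connection a small, randomly weighted number of
# times (1-10) instead of keeping it open for the whole test run.
import random

random.seed(0)  # reproducible for illustration

REUSE_COUNTS = range(1, 11)
# Assumed weights: most real connections serve only a few requests.
WEIGHTS = [30, 20, 13, 10, 8, 6, 5, 4, 2, 2]

def run_simulated_load(total_transactions):
    """Count how many TCP connections a client would open to issue
    total_transactions requests under the weighted reuse policy."""
    connections = 0
    done = 0
    while done < total_transactions:
        reuse = random.choices(REUSE_COUNTS, weights=WEIGHTS, k=1)[0]
        done += reuse
        connections += 1
    return connections

# A 30-minute run at ~10 requests/sec per client is ~18,000 transactions.
print(run_simulated_load(18_000))
```

> Under this policy each connection carries only a handful of transactions,
> so the connection-setup bottleneck stays in the measurement, as it would
> for real browsers.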
> 
> In conclusion, Spyglass's modifications to the WebStone benchmark overstated
> the importance of Keep-Alives by a factor of at least 500.
> 
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> 
> End of attachment.
> 
> --
> ---------------------------------------------------------------------
> Don Hackler
> donh@netscape.com

-- 
--=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--
brian@organic.com  brian@hyperreal.com  http://www.[hyperreal,organic].com/
