httpd-dev mailing list archives

From Bill Stoddard <b...@wstoddard.com>
Subject Re: [PATCH] lingering close thread for worker
Date Thu, 26 Aug 2004 19:05:18 GMT
Joe Schaefer wrote:

> Joe Schaefer <joe+gmane@sunstarsys.com> writes:
> 
> 
>>The concept of multiplexing apache's lingering 
>>close comes from lingerd, but I thought it'd be 
>>interesting to try the same thing for worker with 
>>a dedicated closer thread.
> 
> 
> The patch is intended to improve worker's scaling
> characteristics without adversely affecting per-request
> latency.  I don't have a good testbed for checking this
> out, but I've run a few microbenchmarks with ab (on
> the same host the server is running on) to see what 
> happens when the server is overdriven by lots of 
> concurrent requests.
> 
> Setup: standard installation w/ worker's config
> reduced to
> 
> <IfModule worker.c>
>     StartServers          1
>     MaxClients            5
>     MinSpareThreads       1
>     MaxSpareThreads       5
>     ThreadsPerChild       5
>     MaxRequestsPerChild   0
> </IfModule>
> 
> i.e. 1 server w/ 5 threads.  The closer_thread's 
> queue/pollset sizes are capped at 100 with this config.
> 
> Running ab -n 10000 -c $concurrency http://localhost/
> 
> concurrency         requests/sec
>              unpatched          with patch (CLOSER_DEBUG undefined)
>    5           2995               2923
>   10           2999               2990
>   20           2991               2935
>   50           2975               2896
>  100           2715               2853
>  200           2530               2659
>  500           1871               2353
>  600           1014               2316
>  700            547               2094
>  800            450               2021
>  900            428               2042
> 1000            230               2000
> 

I'd like to see if others can replicate these results.  This is sort of the
behaviour I expected: the patched server is slower at low concurrency rates
because it is doing more queuing work for little benefit.  I also expected the
crossover in performance as the concurrency increased, but I am -really-
surprised at the magnitude of the difference beginning around 500 concurrent
clients!!  I almost wonder if a large number of requests are actually failing
in the patched case under high load...
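
For the archives, here is roughly how I picture the closer-thread idea Joe
describes above: an untested sketch in plain poll()/pthreads terms, with
names I made up for illustration.  It is not code from the patch, which
works on APR sockets inside the worker MPM.

    /*
     * Sketch only: one dedicated thread multiplexes the lingering close
     * for all connections, in the spirit of lingerd.
     */
    #include <poll.h>
    #include <pthread.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MAX_LINGER 100      /* cap on the closer's queue/pollset size   */
    #define LINGER_MS  2000     /* rough stand-in for the lingering timeout */

    static int lingering[MAX_LINGER];
    static int nlinger;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    /* Worker thread: response fully written, hand the socket off and go
     * back to serving requests instead of blocking in lingering_close(). */
    void hand_off_to_closer(int fd)
    {
        shutdown(fd, SHUT_WR);              /* send FIN, we won't write again */
        pthread_mutex_lock(&lock);
        if (nlinger < MAX_LINGER) {
            lingering[nlinger++] = fd;
            pthread_cond_signal(&nonempty);
            pthread_mutex_unlock(&lock);
            return;
        }
        pthread_mutex_unlock(&lock);
        close(fd);                          /* queue full: just close */
    }

    static void forget(int fd)              /* drop fd from the shared array */
    {
        int i;
        pthread_mutex_lock(&lock);
        for (i = 0; i < nlinger; i++) {
            if (lingering[i] == fd) {
                lingering[i] = lingering[--nlinger];
                break;
            }
        }
        pthread_mutex_unlock(&lock);
    }

    /* One dedicated thread drains and closes every lingering connection,
     * so N lingering closes cost one thread instead of N worker threads. */
    void *closer_thread(void *unused)
    {
        struct pollfd pfds[MAX_LINGER];
        char drain[512];

        for (;;) {
            int i, n, nfds;

            pthread_mutex_lock(&lock);
            while (nlinger == 0)
                pthread_cond_wait(&nonempty, &lock);
            nfds = nlinger;                 /* snapshot the current queue */
            for (i = 0; i < nfds; i++) {
                pfds[i].fd = lingering[i];
                pfds[i].events = POLLIN;
            }
            pthread_mutex_unlock(&lock);

            n = poll(pfds, nfds, LINGER_MS);
            for (i = 0; i < nfds; i++) {
                int done = 0;
                if (n <= 0) {
                    /* timed out; a real implementation would keep a
                     * per-connection deadline instead of this blunt cutoff */
                    done = 1;
                }
                else if (pfds[i].revents & (POLLIN | POLLERR | POLLHUP)) {
                    if (recv(pfds[i].fd, drain, sizeof(drain), 0) <= 0)
                        done = 1;           /* EOF or error from the client */
                }
                if (done) {
                    forget(pfds[i].fd);     /* remove before closing so the
                                               fd number can't be recycled
                                               while still queued */
                    close(pfds[i].fd);
                }
            }
        }
        return NULL;
    }

The appeal is that a client that dawdles during close ties up one slot in a
100-entry pollset rather than one of only 5 worker threads, which would be
consistent with the crossover in the numbers above.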

Bill
