httpd-dev mailing list archives

From Rasmus Lerdorf <ras...@lerdorf.on.ca>
Subject Parallel request limits
Date Mon, 29 Jun 1998 03:42:57 GMT
I have seen a couple of benchmark tests now that showed a sudden drop-off
in performance as the number of concurrent clients was increased.

I figured it would be a good idea to try to track down the reason.  I had
always assumed it was something simple like MaxClients being set too low.

There is one such comparison here:

  http://www.acme.com/software/thttpd/serverperf.gif

Information on how the test was done is here:

  http://www.acme.com/software/thttpd/benchmarks.html

I grabbed the http_load program from:

  http://www.acme.com/software/http_load/

A very simple test: a small 1854-byte jpg file requested 1000 times from
localhost port 80.  (In the commands below, the second number is the number
of parallel requests to perform.  I had to hack http_load slightly to make
it call setrlimit for itself so it would have enough file descriptors to
operate.)
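
In case anyone wants to do the same, the change amounts to something like
this, called early in main() (a rough sketch rather than the exact diff;
the function name is just for illustration):

    #include <sys/time.h>
    #include <sys/resource.h>

    /* Bump the soft file descriptor limit up to the hard limit before
     * opening any connections.  Sketch only, not the actual patch. */
    void
    raise_fd_limit(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            rl.rlim_cur = rl.rlim_max;   /* e.g. 40 -> 400 on this box */
            (void) setrlimit(RLIMIT_NOFILE, &rl);
        }
    }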

The important server conf settings are:
MinSpareServers 20
MaxSpareServers 40
StartServers 20
MaxClients 255

The server is a Sun Ultra-1 with 128 MB of memory, running Apache 1.3.0
under Solaris 2.5.1.

> http_load files 1000 1
1000 connections, 1854000 bytes, in 7 seconds
142.857 connections/sec, 264857 bytes/sec

> http_load files 1000 5
1000 connections, 1854000 bytes, in 7 seconds
142.857 connections/sec, 264857 bytes/sec

> http_load files 1000 10
1000 connections, 1854000 bytes, in 7 seconds
142.857 connections/sec, 264857 bytes/sec

> http_load files 1000 25
1000 connections, 1854000 bytes, in 7 seconds
142.857 connections/sec, 264857 bytes/sec

> http_load files 1000 50
1000 connections, 1854000 bytes, in 6 seconds
166.667 connections/sec, 309000 bytes/sec

> http_load files 1000 75
1000 connections, 1854000 bytes, in 43 seconds   <---
23.2558 connections/sec, 43116.3 bytes/sec

> http_load files 1000 100
1000 connections, 1854000 bytes, in 127 seconds  <---
7.87402 connections/sec, 14598.4 bytes/sec


Ouch!  Major drop-off here!  I can tell from watching the access_log that
there are long delays.  It will do a bunch of requests and then just sit
there for many seconds, whereas at 50 parallel requests or fewer it just
streams along.

Ok, so I doubled the settings so that they were:

MinSpareServers 40
MaxSpareServers 80
StartServers 40

(kept MaxClients at 255)

Running the 75 and 100 cases again shows a big improvement:

> http_load files 1000 75
1000 connections, 1854000 bytes, in 6 seconds
166.667 connections/sec, 309000 bytes/sec

> http_load files 1000 100
1000 connections, 1854000 bytes, in 7 seconds
142.857 connections/sec, 264857 bytes/sec

Ok, to narrow down which setting is responsible, I dropped MaxSpareServers
back down to 40, so the settings are now:

MinSpareServers 40
MaxSpareServers 40
StartServers 40

> http_load files 1000 50
1000 connections, 1854000 bytes, in 6 seconds
166.667 connections/sec, 309000 bytes/sec

> http_load files 1000 100
1000 connections, 1854000 bytes, in 67 seconds
14.9254 connections/sec, 27671.6 bytes/sec

So, it is happy at 50 parallel requests, but unhappy at 100 with this
MaxSpareServers setting.

Ok, so I bumped MaxSpareServers up to 255 to match my MaxClients setting
to see how high I can go:

> http_load files 1000 100
1000 connections, 1854000 bytes, in 6 seconds
166.667 connections/sec, 309000 bytes/sec

> http_load files 1000 125
1000 connections, 1854000 bytes, in 8 seconds
125 connections/sec, 231750 bytes/sec

> http_load files 1000 150
1000 connections, 1854000 bytes, in 7 seconds
142.857 connections/sec, 264857 bytes/sec

> http_load files 1000 175
1000 connections, 1854000 bytes, in 8 seconds
125 connections/sec, 231750 bytes/sec

> http_load files 1000 200
1000 connections, 1854000 bytes, in 79 seconds
12.6582 connections/sec, 23468.4 bytes/sec

Going above 200, I started to get "no more processes" error messages.

sysdef gives me the following:

*
* Tunable Parameters
*
 2596864        maximum memory allowed in buffer cache (bufhwm)
    1978        maximum number of processes (v.v_proc)
      99        maximum global priority in sys class (MAXCLSYSPRI)
    1973        maximum processes per user id (v.v_maxup)
      30        auto update time limit in seconds (NAUTOUP)
      25        page stealing low water mark (GPGSLO)
       5        fsflush run rate (FSFLUSHR)
      25        minimum resident memory for avoiding deadlock (MINARMEM)
      25        minimum swapable memory for avoiding deadlock (MINASMEM)
*
* Utsname Tunables
*
   5.5.1  release (REL)
     asf  node name (NODE)
   SunOS  system name (SYS)
Generic_103640-08  version (VER)
*
* Process Resource Limit Tunables (Current:Maximum)
*
Infinity:Infinity       cpu time
Infinity:Infinity       file size
7ffff000:Infinity       heap size
  800000:7ffff000       stack size
Infinity:Infinity       core file size
      40:     400       file descriptors
Infinity:Infinity       mapped memory

So I probably need to raise some of these limits if I want to test a higher
number of parallel requests with this http_load program.  I am pretty sure
this is what these various benchmarkers are running into.  We really should
put together some sort of performance/benchmark tuning page that explains
these issues a bit better.
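
For anyone trying to reproduce this, a quick sanity check before a run is
to print the limits the shell is actually handing out.  Something like the
following throwaway program would do it (the output format is just mine):

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    /* Print the soft and hard file descriptor limits for the current
     * process, to see whether a run is about to bump into the
     * 40-descriptor soft limit shown by sysdef above. */
    int
    main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("file descriptors: soft %ld, hard %ld\n",
               (long) rl.rlim_cur, (long) rl.rlim_max);
        return 0;
    }

Within the 400 hard limit, ulimit -n from ksh should be enough to raise the
soft limit for both httpd and http_load; going past 400 presumably means
raising the hard limit as root or via a kernel tunable.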

-Rasmus

