httpd-users mailing list archives

From "Sylvain Beaux" <sylvain.be...@gmail.com>
Subject [users@httpd] Apache benchmark in reverse proxy scenario for large Web apps
Date Mon, 21 Jul 2008 15:20:34 GMT
Hi guys,

I'm designing an Apache reverse proxy, using a home-made Perl
script for cookie support.

I need some advice on designing Apache as a reverse proxy in front of
many web apps (~20 servers).

There will be 500 users on this system 24/7.
Each user will keep 2 permanent HTTPS connections using chunked encoding and
1 HTTPS connection using the keep-alive mechanism.

So in total Apache will have to handle 1000 simultaneous permanent
HTTPS connections plus 500 keep-alive HTTPS connections.

Another point: Apache will have to rewrite the HTTP Location
header to support HTTP 302 redirections, and authenticate users through a
Perl module that provides Basic + cookie authentication.

I ran some tests with Apache's ab tool against Apache 2.2.9.
My hardware configuration is:

# cat /proc/cpuinfo
processor       : 0
model name      : Intel(R) Pentium(R) 4 CPU 2.80GHz
processor       : 1
model name      : Intel(R) Pentium(R) 4 CPU 2.80GHz

# top
Mem:   2067440k total,   691124k used,  1376316k free,   190704k buffers

# apachectl -V
Server version: Apache/2.2.9 (Unix)
Server built:   Jul  8 2008 16:17:39
Server's Module Magic Number: 20051115:15
Server loaded:  APR 1.2.7, APR-Util 1.2.7
Compiled using: APR 1.2.7, APR-Util 1.2.7
Architecture:   32-bit
Server MPM:     Prefork
  threaded:     no
    forked:     yes (variable process count)
Server compiled with....
 -D APACHE_MPM_DIR="server/mpm/prefork"
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=128

-----------------httpd.conf-----------------
<IfModule prefork.c>
        MinSpareServers   50
        MaxSpareServers   50
        StartServers      50
        ServerLimit       10000
        MaxClients        500
        MaxRequestsPerChild 5000
</IfModule>
----------------END ----------------------

I chose MaxClients=500 based on an estimate of the child
process memory usage.

Here is the calculation:
                          Total_RAM_free
  MaxClients = -----------------------------------------
               Max_Process_Size - Shared_RAM_per_Child

and I get these values with top for one worker child process:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  DATA COMMAND
 9345 daemon    16   0 27540 9868 3660 R    5  0.5   0:00.19 6084 httpd

Shared memory per process ≈ 3.5 MB (SHR = 3660 KB)
Memory per process ≈ 6.0 MB (DATA = 6084 KB)

so I calculate MaxClients = 1300 MB (1,331,200 KB) / (6084 KB − 3660 KB) ≈ 549 processes,
and I preferred to keep some memory margin, just in case.
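As a sanity check on my arithmetic, here is a quick script using the top values above (all figures in KB; the 1300 MB is my rounded-down free RAM, not the exact free figure from top):

```python
# Back-of-the-envelope MaxClients check, using the `top` values above.
free_ram_kb = 1300 * 1024          # ~1300 MB of free RAM kept for httpd children
data_kb = 6084                     # DATA column for one child process
shared_kb = 3660                   # SHR column for one child process
private_kb = data_kb - shared_kb   # memory each additional child really costs

max_clients = free_ram_kb // private_kb
print(max_clients)  # → 549
```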

Do you agree with this result ?

I tested 500 simultaneous connections with a total of 10,000 requests.
The aim was to put Apache under heavy load and to simulate 500 permanent
connections, but the result was not very good:

Document Path:          /uc/favicon.ico
Document Length:        1022 bytes

Concurrency Level:      500
Time taken for tests:   549.188 seconds
Complete requests:      10000
Failed requests:        264
   (Connect: 0, Receive: 0, Length: 264, Exceptions: 0)
Write errors:           0
Non-2xx responses:      2
Total transferred:      13543827 bytes
HTML transferred:       9951006 bytes
Requests per second:    18.21 [#/sec] (mean)
Time per request:       27459.375 [ms] (mean)
Time per request:       54.919 [ms] (mean, across all concurrent requests)
Transfer rate:          24.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       63 10873 13533.9   9016  130391
Processing:    47 15784 28431.9   9828  321688
Waiting:        0 12616 27897.6   6516  318703
Total:        469 26657 34939.1  19484  341219

Percentage of the requests served within a certain time (ms)
  50%  19484
  66%  22094
  75%  24922
  80%  28547
  90%  45891
  95%  82531
  98%  175422
  99%  219375
 100%  341219 (longest request)

So I realized that the average time per request was very high!
Time per request:       27459.375 [ms]
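If I read ab's output right, the two "Time per request" lines are simply related by the concurrency level, so the big first number is what you get when 500 clients share the server:

```python
# Relationship between ab's two "Time per request" lines, from the report above.
total_time_s = 549.188   # "Time taken for tests"
requests = 10000
concurrency = 500

# Mean time per request across all concurrent requests (the second line).
per_request_across_all_ms = total_time_s * 1000 / requests   # ≈ 54.919 ms

# Mean time per request as one client sees it (the first line).
per_request_mean_ms = per_request_across_all_ms * concurrency  # ≈ 27459.4 ms

print(round(per_request_across_all_ms, 3), round(per_request_mean_ms, 1))
```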

Do I need better hardware, or is my configuration not optimal? (I
used this doc:
http://perl.apache.org/docs/1.0/guide/performance.html#Performance_Tuning_by_Tweaking_Apache_Configuration)

I.e., to fully support 1500 processes: 1500 × (6084 KB − 3660 KB) ≈ 3,550 MB of RAM.
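Checking that estimate with the same per-child figures (in KB) as above:

```python
# RAM needed to fully back 1500 prefork children, using my per-child numbers.
private_kb = 6084 - 3660            # DATA - SHR, KB of private memory per child
total_kb = 1500 * private_kb
print(total_kb // 1024)  # → 3550 (MB), roughly 3.6 GB
```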

And one last question: how does Apache handle chunked-encoding requests?
Does it dedicate 1 process per connection, so that 1000 simultaneous
connections would require 1000 processes?
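If my assumption is right that the prefork MPM dedicates one child process to each open connection for its whole lifetime, then my own per-user numbers already exceed my MaxClients setting (a trivial tally, assuming the connection counts from the start of this mail):

```python
# Processes needed under the (assumed) one-process-per-connection prefork model.
users = 500
chunked_conns_per_user = 2     # permanent chunked-encoding HTTPS connections
keepalive_conns_per_user = 1   # keep-alive HTTPS connection

needed = users * (chunked_conns_per_user + keepalive_conns_per_user)
max_clients = 500              # my current httpd.conf setting
print(needed, max_clients)     # → 1500 500
```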


thanks a lot :)

--
Sylvain Beaux

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org

