httpd-dev mailing list archives

From: Justin Erenkrantz <jerenkra...@ebuilt.com>
Subject: FreeBSD load statistics.
Date: Thu, 10 Jan 2002 07:09:35 GMT
Yes, I'm seeing more system calls with post-2.0.28 builds.  But I
can't really say that they're hurting anything.  Remember that we
did a lot of optimization to take bottlenecks out.  I'm wondering
if we've just shifted the optimization burden from our code to
FreeBSD.  If so, that's exactly what we should be doing.  =)

I ran a timed flood test with 5 clients for 60 seconds and captured
vmstat -w 5 output for both 2.0.28 and HEAD.  The syscall and
context-switch averages over the run are higher with post-2.0.28,
but the req/sec is noticeably higher as well (no other config
differences).  So I'm not convinced that these extra syscalls are
doing us any harm.

vmstat only records the syscall counts over each sampling interval,
so -w 5 (one sample every 5 seconds) looks like the best way to use
it.
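
For anyone who wants to reproduce this, here's a minimal sketch of
the capture setup, assuming flood's usual one-argument invocation;
the config and log file names are placeholders, not the exact
commands behind the output below:

  vmstat -w 5 > vmstat.log &          # sample system stats every 5s
  VMSTAT_PID=$!
  flood flood-timed.xml > flood.out   # timed profile (5 clients, 60
                                      # sec); config name is made up
  kill $VMSTAT_PID                    # stop sampling after the run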

2.0.28's vmstat -w 5:
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr ad0 ac0   in   sy  cs us sy id
 0 1 0   10024 359888   74   0   0   0  74   0   0   0  799  341 1439  1  6 93
 0 1 0   10400 359880    1   0   0   0   0   0   0   0  331   10 220  0  0 100
 1 1 0   22128 359880    1   0   0   0   0   0   0   0 5368 3012 13832  7 46 47
 4 1 0   22128 359872    1   0   0   0   0   0   0   0 9875 5512 26003 14 82  4
 4 1 0   21480 359676   10   0   0   0   1   0   1   0 10094 5659 26677 14 85  2
 4 1 0   21104 359676    1   0   0   0   0   0   0   0 10127 5519 26706 14 85  2
 4 1 0   21104 359676    1   0   0   0   1   0   1   0 10081 5607 26605 12 85  3
 2 1 0   21884 359656   26   0   0   0  24   0   0   0 9939 5631 26270 11 87  2
 3 1 0   21884 359620    3   0   0   0   0   0   0   0 9832 5429 25915 15 80  5
 1 2 0   21884 359604    2   0   0   0   0   0   0   0 9907 5522 26104 14 81  5
 4 1 0   21884 359596    1   0   0   0   0   0   0   0 10098 5683 26756 12 85  3
 2 1 0   21104 359596    1   0   0   0   0   0   0   0 10016 5656 26445 12 84  4
 3 1 0   21104 359592    1   0   0   0   0   0   0   0 10012 5679 26451 12 83  5
 2 1 0   21480 359580    1   0   0   0   1   0   2   0 10058 5612 26533 15 83  2
 0 1 0   20308 359788    4   0   0   0  13   0   0   0 4518 2401 11577  5 38 56
(vmstat returned to idle after this; the test had completed by then)

2.0.28's flood report:
Slowest pages on average (worst 5):
   Average times (sec)
connect write   read    close   hits    URL
0.0040  0.0041  0.0376  0.0376  2232
http://walla.apl.ebuilt.net:7888/manual/new_features_2_0.html
0.0000  0.0001  0.0367  0.0367  2232
http://walla.apl.ebuilt.net:7888/manual/mod/core.html
0.0000  0.0001  0.0175  0.0175  2232
http://walla.apl.ebuilt.net:7888/manual/logs.html
0.0000  0.0001  0.0162  0.0162  2232
http://walla.apl.ebuilt.net:7888/manual/content-negotiation.html
0.0000  0.0001  0.0142  0.0142  2232
http://walla.apl.ebuilt.net:7888/manual/mod/directives.html
Requests: 13392 Time: 59.66 Req/Sec: 224.47

2.0.31-dev's vmstat -w 5:
 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr ad0 ac0   in   sy  cs us sy id
 0 1 0   20188 361680   74   0   0   0  74   0   0   0  812  349 1476  1  7 92
 0 1 0   20188 361668    1   0   0   0   0   0   0   0  330    8 219  0  1 99
 4 1 0   24016 360416   58   0   0   0   1   0   1   0 10026 6016 26651 13 70 18
 0 1 0   24192 360208   10   0   0   0   0   0   0   0 11389 6699 30793 15 82  3
 4 1 0   22372 360208    1   0   0   0   0   0   0   0 11390 6700 30858 13 84  3
 2 1 0   23376 360000   10   0   0   0   0   0   0   0 11225 6728 30327 15 80  5
 3 1 0   23376 360000    1   0   0   0   1   0   2   0 10989 6497 29713 16 79  5
 6 1 0   23376 360000    1   0   0   0   0   0   0   0 11457 6569 31064 14 81  4
 3 1 0   23376 360000    1   0   0   0   0   0   0   0 11458 6693 31039 14 83  2
 5 1 0   23376 360000    1   0   0   0   0   0   0   0 11456 6556 30984 16 81  3
 2 1 0   23780 360000    1   0   0   0   0   0   0   0 11391 6598 30844 16 81  3
 2 1 0   23780 360000    1   0   0   0   0   0   0   0 11293 6785 30601 16 81  3
 1 1 0   23780 360000    1   0   0   0   1   0   1   0 11141 6530 30141 16 80  4
 3 1 0   23780 360000    1   0   0   0   0   0   0   0 11272 6726 30450 13 82  5
 0 1 0   22372 360168    5   0   0   0  12   0   0   0  765  272 1400  0  4 96
 0 1 0   22372 360168    1   0   0   0   0   0   1   0  330    8 219  0  1 99

2.0.31-dev (HEAD) flood output:
Slowest pages on average (worst 5):
   Average times (sec)
connect write   read    close   hits    URL
0.0000  0.0001  0.0386  0.0386  2619
http://walla.apl.ebuilt.net:7888/manual/mod/core.html
0.0081  0.0081  0.0278  0.0278  2619
http://walla.apl.ebuilt.net:7888/manual/new_features_2_0.html
0.0000  0.0001  0.0134  0.0134  2619
http://walla.apl.ebuilt.net:7888/manual/content-negotiation.html
0.0000  0.0001  0.0131  0.0131  2619
http://walla.apl.ebuilt.net:7888/manual/logs.html
0.0000  0.0001  0.0120  0.0120  2619
http://walla.apl.ebuilt.net:7888/manual/mod/directives.html
Requests: 15714 Time: 59.57 Req/Sec: 263.82

The number I like is 224.47 req/sec (2.0.28) vs. 263.82 (HEAD),
about a 17% increase.  The 5-second syscall rate is 5679 vs. 6785
(about a 19% increase).  A 17% increase in req/sec for a 19%
increase in syscalls is about a statistical wash.
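
Working the ratios out (back-of-the-envelope, using the same figures
quoted above):

  263.82 / 224.47 ~ 1.175   (~17.5% more req/sec)
   6785  /  5679  ~ 1.195   (~19.5% more syscalls per interval)
  1.195  / 1.175  ~ 1.017   (~2% more syscalls per request)

So per request served, HEAD issues only about 2% more syscalls than
2.0.28 did, which is well within the noise.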

As shown by the vmstat output (the r column), there are definitely
more processes in the run queue with HEAD than with 2.0.28.  Is that
necessarily wrong?  Can someone make me believe that a higher
run-queue is a bad thing?  I can't see anything detrimental here,
and the statistic I care most about (req/sec) shows an increase in
recent builds.  Please show me the error of my ways.  I want to
learn.  =)  -- justin

