httpd-dev mailing list archives

From Brian Pane <brian.p...@cnet.com>
Subject Re: FreeBSD load statistics.
Date Thu, 10 Jan 2002 07:55:21 GMT
Justin Erenkrantz wrote:

>Yes, I'm seeing more system calls with post-2.0.28 builds.  But,
>I can't really say that they are hurting anything.  Remember that
>we did a lot of optimization to take bottlenecks out.  I'm wondering
>if we've just shoved the optimization requirements from us to 
>FreeBSD.  If so, that's exactly what we should do.  =)
>
...

>The number I like is: 224.47 (2.0.28) vs. 263.82 (HEAD) for about
>a ~17% increase in req/sec.  The 5 second syscall rate is 5679 vs. 
>6785 (~19% increase).  17% increase in rps yields 19% increase in 
>syscalls.  I'd say that's about a statistical wash.
>

I agree with your assessment.  These results don't look bad at all;
the increase in syscalls/second being proportional to the increase in
throughput just means that we're getting better utilization of the
hardware.
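
To put per-request numbers on it (my arithmetic, assuming vmstat's sy
column is syscalls per second averaged over the 5-second interval):
5679 / 224.47 ~= 25.3 syscalls per request on 2.0.28, and
6785 / 263.82 ~= 25.7 on HEAD.  The per-request syscall cost is
essentially flat, which is exactly what a wash should look like.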

>As shown by the vmstat, there are definitely more processes actively
>running with HEAD than with 2.0.28.  Is that necessarily wrong?  Can
>someone make me believe that a higher run-queue is a bad thing?  I 
>can't see anything detrimental here, and the statistic I care most 
>about (rps) shows an increase in recent builds.  Please show me
>the error of my ways.  I want to learn.  =)  -- justin
>

In the case of your test data, I agree.  In your vmstat output, there's
an increase in run queue length as the CPU utilization approaches 100%.
No problem there at all.

Daedalus, in contrast, looks really screwed up.  Here's an interesting bit
of the vmstat output that Greg captured from 2.0.30 on daedalus, from the
trace file at http://www.apache.org/~gregames/vmstat:

 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
237 3 0  250776 115292   94   0   0   0  60   0  27   7  888 1853 2512  2  7 92
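
(For anyone reading along: in FreeBSD vmstat output, 'r' is the number
of processes on the run queue, the in/sy/cs columns under faults are
interrupts, syscalls, and context switches per second, and the
us/sy/id columns on the right are CPU percentages.)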

Two hundred and thirty-seven runnable procs, with a nearly idle CPU.  Wow.
So the load spike on daedalus wasn't the same phenomenon that occurred
in your tests.  Even after the switch back to 2.0.28, there were very
high run queue lengths with a near-idle CPU:

 procs      memory      page                    disks     faults      cpu
 r b w     avm    fre  flt  re  pi  po  fr  sr da0 da1   in   sy  cs us sy id
64 3 0  256552  42276  509   0   0   0 333   0   5   6  842 4154 1515 11  8 81

My hypothesis upon seeing these numbers is that the high run queue length
is due to us hitting some bottleneck in the kernel on daedalus--and because
post-2.0.28 releases waste less time in user-space code between syscalls,
they hit that kernel bottleneck a lot harder than 2.0.28 did.
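
For what it's worth, here's one way we could poke at that theory.  This
is purely my speculation about *which* kernel bottleneck (the trace
doesn't identify one); but if it were, say, accept() wakeups, a toy
reproducer like the sketch below, run next to vmstat, should show 'r'
spiking toward NCHILDREN while the CPU stays mostly idle on a kernel
that wakes every sleeper per connection:

    /* Hypothetical sketch, not from the daedalus setup: park NCHILDREN
     * processes in accept() on one listening socket, then drive
     * connections at port 8888 from another shell and watch vmstat. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define NCHILDREN 64

    int main(void)
    {
        struct sockaddr_in addr;
        int fd, i;

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(8888);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }
        listen(fd, 128);

        for (i = 0; i < NCHILDREN; i++) {
            if (fork() == 0) {
                /* Each child blocks in accept(); a "thundering herd"
                 * kernel briefly marks all of them runnable on every
                 * incoming connection. */
                for (;;) {
                    int conn = accept(fd, NULL, NULL);
                    if (conn >= 0)
                        close(conn);
                }
            }
        }
        pause();  /* parent idles; kill the process group to clean up */
        return 0;
    }

If 'r' stays high under load while 'id' stays high too, that's the same
signature as the daedalus trace; if not, the bottleneck is elsewhere.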

What do you think?

--Brian


