From: Greg Ames
Date: Fri, 11 Jan 2002 17:23:29 -0500
To: dev@httpd.apache.org
Subject: Re: FreeBSD load statistics.
Message-ID: <3C3F65E1.A8CF85BF@remulak.net>
References: <20020110070935.GH14870@ebuilt.com> <3C3D48E9.1030702@cnet.com>
 <20020110235837.GR14870@ebuilt.com> <3C3F2C55.6CFED3DD@remulak.net>
 <3C3F2F71.6040102@cnet.com>

Brian Pane wrote:
>
> Greg Ames wrote:
>
> >>On Wed, Jan 09, 2002 at 11:55:21PM -0800, Brian Pane wrote:
> >>
> >>>My hypothesis upon seeing these numbers is that the high run queue
> >>>length is due to us hitting some bottleneck in the kernel on
> >>>daedalus--and because post-2.0.28 releases waste less time in
> >>>user-space code between syscalls, they hit that kernel bottleneck
> >>>a lot harder than 2.0.28 did.
> >
> >An interesting theory.  But it doesn't explain the high numbers of
> >CPU ticks I've recorded for trivial requests, nor does it explain
> >the top Brian B sent us with two httpds both using over 50% of the
> >CPU for a while.
>
> It might explain the high number of CPU ticks for simple requests if
> a lot of processes are hitting, say, a spin lock at the same time.
> But the two httpds using 50% of the CPU are definitely something
> different.

OK, spin locks sound like a reasonable explanation.  I'm surprised
that they spin that long, though.

I have 2_0_28 with CPU logging up now on port 8092.  I ran log replay
against it and saw one simple request that used 221 CPU ticks
(1.7 sec).  Everything else was 20 ticks or less.  So either this
happens all the time and is a red herring, or this build has the
problem too.  I suspect it's the former, but I'll put it into
production later to be sure.

> We really need some data on what's happening in those processes on
> daedalus.

Yessir.  I'd love it if there were something like ktrace that wrote
to a big array in memory (i.e., not to a file) that we could grab if
the run queue exceeded, say, 180.

Greg
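
P.S. To make the spin-lock theory concrete, here's a rough userland
sketch (hypothetical code, not anything from httpd or the FreeBSD
kernel) of why contention shows up as CPU ticks: each waiter
busy-loops on the lock word, so the waiting itself is charged to the
process as CPU time, and a burst of contention can look exactly like
a handful of processes burning 50%+ CPU.

    /* Naive test-and-set spin lock (illustrative only). */
    #include <stdatomic.h>

    typedef struct { atomic_int locked; } spinlock_t;

    static void spin_lock(spinlock_t *l)
    {
        /* While another CPU holds the lock, this loop spins; every
         * iteration is real work as far as the scheduler's tick
         * accounting is concerned. */
        while (atomic_exchange(&l->locked, 1))
            ;  /* busy-wait: the "wasted" ticks accumulate here */
    }

    static void spin_unlock(spinlock_t *l)
    {
        atomic_store(&l->locked, 0);
    }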
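
P.P.S. The ktrace-to-memory idea might look something like the sketch
below (hypothetical names throughout, and a real version would live
in the kernel): events are cheap appends into a fixed in-memory ring,
and nothing gets written out unless the sampled run-queue length
crosses the 180 trigger.

    #include <stdio.h>
    #include <time.h>

    #define TRACE_SLOTS  4096
    #define RUNQ_TRIGGER 180

    struct trace_rec {
        struct timespec ts;
        const char     *event;    /* e.g. a syscall name */
    };

    static struct trace_rec ring[TRACE_SLOTS];
    static unsigned long head;    /* total events ever logged */

    /* Append one record; overwrites the oldest slot when full. */
    static void trace(const char *event)
    {
        struct trace_rec *r = &ring[head++ % TRACE_SLOTS];
        clock_gettime(CLOCK_MONOTONIC, &r->ts);
        r->event = event;
    }

    /* Call this wherever the run queue is sampled; it dumps the ring
     * only when the queue exceeds the trigger threshold. */
    static void maybe_dump(int runq_len)
    {
        if (runq_len <= RUNQ_TRIGGER)
            return;
        unsigned long n = head < TRACE_SLOTS ? head : TRACE_SLOTS;
        for (unsigned long i = head - n; i < head; i++) {
            struct trace_rec *r = &ring[i % TRACE_SLOTS];
            fprintf(stderr, "%ld.%09ld %s\n",
                    (long)r->ts.tv_sec, (long)r->ts.tv_nsec, r->event);
        }
    }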