Date: Mon, 26 Jan 2004 10:09:20 -0800
From: Aaron Bannert
To: dev@httpd.apache.org, colm@stdlib.net
Subject: Re: [PATCH] raise MAX_SERVER_LIMIT
Message-ID: <20040126180919.GB5094@clove.org>
In-Reply-To: <20040115160438.GA19015@castlerea.stdlib.net.>

On Thu, Jan 15, 2004 at 04:04:38PM +0000, Colm MacCarthaigh wrote:
> There were other changes coincidental to that, like going to 12 GB
> of RAM, which certainly helped, so it's hard to narrow it down too
> much.

OK, with 18,000 or so child processes (all in the run queue), what does
your load look like? Also, what kind of memory footprint are you seeing?

> I don't use worker because it still dumps an un-backtraceable corefile
> within about 5 minutes for me. I still have no idea why, though I have
> plenty of corefiles. I haven't tried a serious analysis yet, because
> I've been moving house, but I hope to get to it soon. Moving to worker
> would be a good thing :)

I'd love to find out what's causing your worker failures. Are you using
any thread-unsafe modules or libraries?

-aaron
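
For reference, the ceiling this patch raises is enforced when the
ServerLimit directive is parsed at startup. Below is a minimal
standalone sketch of that clamp, assuming the 2.0-era prefork
behavior; the 20000 default and the warning wording are paraphrased
from memory rather than quoted from the tree:

#include <stdio.h>
#include <stdlib.h>

/* Compile-time ceiling on ServerLimit; this is the value the patch
 * under discussion raises. 20000 matches my recollection of the
 * 2.0-era prefork default, but treat the number as an assumption. */
#ifndef MAX_SERVER_LIMIT
#define MAX_SERVER_LIMIT 20000
#endif

static int server_limit;

/* Sketch of how a ServerLimit argument is clamped at startup: any
 * configured value above the compile-time cap is silently lowered
 * (with a warning) rather than rejected. */
static void set_server_limit(const char *arg)
{
    server_limit = atoi(arg);
    if (server_limit > MAX_SERVER_LIMIT) {
        fprintf(stderr,
                "WARNING: ServerLimit of %d exceeds compile-time limit "
                "of %d servers, lowering ServerLimit to %d.\n",
                server_limit, MAX_SERVER_LIMIT, MAX_SERVER_LIMIT);
        server_limit = MAX_SERVER_LIMIT;
    }
}

int main(void)
{
    set_server_limit("25000");  /* clamped down to MAX_SERVER_LIMIT */
    printf("effective ServerLimit: %d\n", server_limit);
    return 0;
}

For scale on the footprint question: if all 18,000 children were
resident within 12 GB and shared nothing, that would average roughly
12 GB / 18,000, or about 700 KB per child; copy-on-write sharing of
the parent's pages makes the real per-child cost lower in practice.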
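
On the thread-safety question, here is a self-contained illustration
(not httpd code) of why a call that is harmless under prefork can fail
under worker. strtok() keeps its cursor in hidden static storage, so
two threads tokenizing different strings trample each other; the
outcome is nondeterministic, anywhere from garbled tokens to a crash:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *tokenize(void *arg)
{
    char buf[64];
    strcpy(buf, (const char *)arg);

    /* Unsafe: strtok's internal cursor is shared process-wide, so
     * a concurrent caller in another thread can redirect it into
     * that thread's buffer mid-loop. */
    for (char *tok = strtok(buf, " "); tok; tok = strtok(NULL, " ")) {
        printf("[%lu] %s\n", (unsigned long)pthread_self(), tok);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, tokenize, "alpha beta gamma delta");
    pthread_create(&b, NULL, tokenize, "one two three four");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Under prefork each child is single-threaded, so the static
     * cursor is never contended; under worker the interleaving above
     * is exactly the kind of thing that produces corefiles with no
     * useful backtrace. */
    return 0;
}

strtok_r() (or APR's apr_strtok()) keeps the cursor in caller-owned
storage and is the usual fix for this particular pattern.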