From: Paul Querna
Date: Wed, 01 Mar 2006 07:17:28 -0800
To: dev@httpd.apache.org
Subject: Re: Event MPM accept() handling
Message-ID: <4405BB08.8010205@force-elite.com>
In-Reply-To: <4405B5EA.5050303@apache.org>

Greg Ames wrote:
> Saju Pillai wrote:
>
>> I can understand why serializing apr_pollset_poll() & accept() for
>> the listener threads doesn't make sense in the event-mpm. A quick
>> look through the code leaves me confused about the following ...
>>
>> It looks like all the listener threads epoll() simultaneously on the
>> listener sockets + their private set of sockets added to the pollset
>> by workers.
>
> Looks like you are correct.
>
> Originally there was a separate event thread for everything but new
> connections, and the listener thread's accept serialization was the
> same as worker's. Then it seemed like a good idea to merge the
> listener and event threads, and for a brief time the MPM only
> supported a single worker process. Since there was only one merged
> listener/event thread in the whole server, there was nothing to
> serialize at that time. Then a few of us grumbled about what happens
> if some 3rd-party module seg faults or leaks memory, and we went back
> to multiple worker processes.
>
>> Will apr_pollset_poll() return "success" to each listener if a new
>> connection arrives on a main listener socket ?
>> If so, won't each listener attempt to accept() the new connection ?
>
> I think so, but I'm not a fancy poll expert. Paul?

Correct. This is on purpose. It actually turns out to be faster to
call a non-blocking accept() and fail than it is to take the accept
lock the way the other MPMs do. (Micro-benchmarks I did back then
seemed to show this, and just hammering a machine and comparing the
results for the Worker and Event MPMs seemed to indicate this too.)

> Then the question is: how bad is it?

Not that bad :)

This is traditionally called the 'Thundering Herd' problem: you have N
worker processes, and all N of them are awoken for a single
accept()'able new client. Unlike the prefork MPM, N is usually a small
number in Event, because you don't need many event threads per set of
worker threads.

I also reason that on a busy server (the place you would most likely
want to run the event MPM) you will have many more non-listener
sockets to deal with, and those will fire more often than new clients
connect, meaning you will already be coming out of _poll() with 'real'
events. So the 'cost' of being put onto the run queue isn't a waste
the way it is with the prefork MPM, where you would just go back into
_poll() without having done anything.

-Paul
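
P.S. For anyone who wants to see the shape of it, here is a minimal
sketch of the non-blocking accept idea. This is my own simplification,
not the actual event MPM source; listener_loop, listen_sock, and the
single-socket pollset are illustrative names only:

#include <apr_errno.h>
#include <apr_network_io.h>
#include <apr_poll.h>

/* Every listener thread runs this loop against the same listening
 * socket.  When a new connection arrives, each thread's
 * apr_pollset_poll() may report it, but only one accept() wins; the
 * losers get EAGAIN and simply go back to polling.  No accept mutex
 * is taken anywhere. */
static void listener_loop(apr_socket_t *listen_sock, apr_pool_t *pool)
{
    apr_pollset_t *pollset;
    apr_pollfd_t pfd = { 0 };
    const apr_pollfd_t *out;
    apr_socket_t *csd;
    apr_int32_t num;
    apr_status_t rv;

    /* Non-blocking is what makes losing the race cheap: a losing
     * thread fails fast with EAGAIN instead of sleeping in accept(). */
    apr_socket_opt_set(listen_sock, APR_SO_NONBLOCK, 1);

    apr_pollset_create(&pollset, 1, pool, 0);
    pfd.p = pool;
    pfd.desc_type = APR_POLL_SOCKET;
    pfd.reqevents = APR_POLLIN;
    pfd.desc.s = listen_sock;
    apr_pollset_add(pollset, &pfd);

    for (;;) {
        rv = apr_pollset_poll(pollset, -1, &num, &out);
        if (rv != APR_SUCCESS) {
            continue;
        }
        /* A real server would accept into a per-connection pool. */
        rv = apr_socket_accept(&csd, listen_sock, pool);
        if (APR_STATUS_IS_EAGAIN(rv)) {
            continue;   /* another listener won the race; re-poll */
        }
        if (rv == APR_SUCCESS) {
            /* ... hand csd off to a worker thread ... */
        }
    }
}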
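
For contrast, the accept serialization in the other MPMs amounts to
roughly this (again a simplification; accept_mutex and ptrans are
placeholder names, and the real code wraps this in its own macros):

/* Only the process holding the mutex sleeps in accept(), so a new
 * connection wakes exactly one of them, at the cost of taking and
 * releasing the lock once per connection. */
apr_proc_mutex_lock(accept_mutex);
rv = apr_socket_accept(&csd, listen_sock, ptrans);
apr_proc_mutex_unlock(accept_mutex);

The event MPM's bet is that the occasional wasted wakeup is cheaper
than that per-connection lock traffic, which is what the benchmark
results above seemed to show.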