httpd-dev mailing list archives

From Paul Querna <c...@force-elite.com>
Subject Re: Event MPM accept() handling
Date Wed, 01 Mar 2006 15:17:28 GMT
Greg Ames wrote:
> Saju Pillai wrote:
> 
>> I can understand why serializing apr_pollset_poll() & accept() for
>> the listener threads doesn't make sense in the event-mpm. A quick
>> look through the code leaves me confused about the following ...
>>
>> It looks like all the listener threads epoll() simultaneously on the
>> listener sockets + their private set of sockets added to the pollset
>> by workers.
> 
> Looks like you are correct.
> 
> Originally there was a separate event thread for everything but new
> connections, and the listener thread's accept serialization was the
> same as worker's. Then it seemed like a good idea to merge the
> listener and event threads, and for a brief time only a single worker
> process was supported. Since there was only one merged listener/event
> thread in the whole server, there was nothing to serialize at that
> time. Then a few of us grumbled about what happens if some 3rd-party
> module seg faults or leaks memory, and we went back to multiple
> worker processes.
> 
>> Will apr_pollset_poll() return "success" to each listener if a new
>> connection arrives on a main listener socket? If so, won't each
>> listener attempt to accept() the new connection?
> 
> I think so, but I'm not a fancy poll expert.  Paul?

Correct. This is on purpose. It actually turns out to be faster to call
a nonblocking accept() and fail than it is to use the AcceptLock() that
the other MPMs use. (Microbenchmarks I did back then seemed to show
this, and just hammering a machine and comparing the results for the
Worker & Event MPMs seemed to indicate this too.)
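
Roughly, the race looks like this. A minimal sketch in plain POSIX C
(not the actual Event MPM code, which goes through APR): the listener
socket is nonblocking, so a thread that loses the race just gets
EAGAIN and goes back to its poll loop.

/* Hypothetical sketch: each listener thread calls this after poll
 * reports the listener socket readable. Whichever thread wins the
 * race gets the connection; the losers get EAGAIN/EWOULDBLOCK and
 * simply return to polling. */
#include <errno.h>
#include <sys/socket.h>

static int try_accept(int listen_fd)
{
    /* listen_fd was made nonblocking at startup, e.g. with
     * fcntl(listen_fd, F_SETFL, O_NONBLOCK). */
    struct sockaddr_storage sa;
    socklen_t salen = sizeof(sa);
    int client = accept(listen_fd, (struct sockaddr *)&sa, &salen);

    if (client < 0) {
        if (errno == EAGAIN || errno == EWOULDBLOCK || errno == ECONNABORTED)
            return -1;  /* another thread won the race; not an error */
        return -1;      /* real error; the caller may want to log it */
    }
    return client;      /* we won; hand this fd to a worker */
}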

> Then the question is: how bad is it?

Not that bad :)

This is traditionally called the 'Thundering Herd' Problem.

When you have N worker processes, all N of them are awoken for an
accept()'able new client. Unlike the prefork MPM, though, N is usually
a smaller number in Event, because you don't need that many event
threads per number of worker threads.

I also reason that on a busy server, which is the place you most likely
want to put the Event MPM, you will have many more non-listener sockets
to deal with, and those will fire more often than new clients connect.
That means you will usually already be coming out of the _poll() with
'real' events, so the 'cost' of being put into the run queue isn't a
'waste', like it is on the Prefork MPM, where you would just go back
into _poll() without having done anything.
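
To make that concrete, here is a hedged sketch of the loop shape, using
epoll directly rather than apr_pollset_poll(), and reusing the
try_accept() sketch above. hand_off_to_worker() and
process_connection_event() are hypothetical helpers, not real httpd
functions. A wakeup that loses the accept race still usually carries
other ready descriptors, so the pass isn't wasted:

#include <sys/epoll.h>

static void listener_pass(int epfd, int listen_fd)
{
    struct epoll_event events[64];
    int n = epoll_wait(epfd, events, 64, -1);

    for (int i = 0; i < n; i++) {
        if (events[i].data.fd == listen_fd) {
            int client = try_accept(listen_fd);  /* may lose: EAGAIN */
            if (client >= 0)
                hand_off_to_worker(client);      /* hypothetical helper */
        }
        else {
            /* A 'real' event on an existing connection: on a busy
             * server most wakeups are these, so this pass was useful
             * even if accept() returned EAGAIN. */
            process_connection_event(&events[i]); /* hypothetical helper */
        }
    }
}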

-Paul

