httpd-dev mailing list archives

From Justin Erenkrantz <>
Subject Re: Single listener - multi-worker
Date Sat, 28 Jul 2001 22:18:56 GMT
On Sat, Jul 28, 2001 at 02:55:50PM -0400, Greg Ames wrote:
> This problem occurs when incoming connections dry up, mostly in
> non-production environments (therefore it's not real high on my priority
> list, but whatever).  If worker processes aren't exiting quickly enough,
> before starting with the SIGTERMs, check the scoreboard to see how many
> processes don't have the new generation number (restarts only), pid ==
> 0, quiescing, or a "G" somewhere in the worker_scores.  Send that many
> dummy connects.  But don't bother with this if the dummy connect sender
> ever times out, because we're probably overloading the kernel's
> connection queue.
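A minimal sketch of the scoreboard scan Greg describes, counting how many dummy connects to send. The struct and names here are illustrative, not the real `scoreboard.h` definitions; only the slot-skipping logic follows the description above:

```c
#include <assert.h>

/* Hypothetical, simplified scoreboard slot -- the real httpd
 * scoreboard has many more fields. */
typedef struct {
    int pid;        /* 0 => slot unused */
    int generation; /* restart generation the worker belongs to */
    int quiescing;  /* nonzero => process already draining */
    char status;    /* 'G' => gracefully finishing its last request */
} worker_score_t;

/* Count slots that still need waking: skip slots that already have
 * the new generation number, pid == 0, quiescing, or a 'G'.  Each
 * remaining slot gets one dummy connect. */
static int dummy_connects_needed(const worker_score_t *ws, int n,
                                 int new_generation)
{
    int needed = 0;
    for (int i = 0; i < n; i++) {
        if (ws[i].pid == 0) continue;                     /* empty slot */
        if (ws[i].generation == new_generation) continue; /* new gen */
        if (ws[i].quiescing) continue;                    /* draining */
        if (ws[i].status == 'G') continue;                /* graceful */
        needed++;   /* old-generation worker still blocked in accept */
    }
    return needed;
}
```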

You have no way of guaranteeing that your targeted threads (from the
old generation) will receive the requests.  

Oh, maybe you want to do this before spawning the new processes in the
case of a restart?  That's an option.  I think it's better to go to a 
single listener/process model, but that's me.  What about when you hit 

In that case, you'll have a bunch of threads in your processes waiting 
for a connection in accept (say we have S_L_U_A - 
SINGLE_LISTEN_UNSERIALIZED_ACCEPT).  At that point, 
multiple processes may be active, so you can never guarantee that N 
dummy connects will ever work (imagine a FIFO accept queue spread 
across processes - some may have w_m_e (workers_may_exit) set - 
others may not, so they'll jump back on the queue).
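A toy model of that failure mode - a shared FIFO accept queue where the waiting threads belong to mixed generations. The function and queue encoding are illustrative only; the point is that a dummy connect wakes whichever thread is next in line, not necessarily an old-generation one:

```c
#include <assert.h>

/* accept_order[i] is the generation of the i-th waiting thread in
 * FIFO order: 1 == old generation (should exit), 2 == new generation.
 * Each dummy connect wakes exactly one waiting thread, in order. */
static int connects_reaching_old_gen(const int *accept_order,
                                     int n_waiting, int n_dummy)
{
    int reached_old = 0;
    for (int i = 0; i < n_dummy && i < n_waiting; i++) {
        if (accept_order[i] == 1)
            reached_old++;
        /* else: the dummy connect is wasted on a new-generation
         * thread that was never supposed to exit. */
    }
    return reached_old;
}
```

So with two old-generation threads waiting behind two new-generation ones, sending exactly two dummy connects only reaches one of the threads you targeted.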

With rbb's proposed patch, this all just works - the thread that
received the POD (pipe-of-death) byte sets w_m_e, releases the 
intra-process lock, and all *idle* threads in that process 
immediately acquire that lock, see w_m_e, and exit before attempting 
to get the cross-process accept lock (if needed).  
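That flag-under-the-lock handshake could be sketched like this - lock and flag names are illustrative, not the exact httpd identifiers:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t intra_process_lock = PTHREAD_MUTEX_INITIALIZER;
static bool workers_may_exit = false;   /* w_m_e */

/* Run by the thread that read the pipe-of-death byte. */
static void pod_received(void)
{
    pthread_mutex_lock(&intra_process_lock);
    workers_may_exit = true;
    pthread_mutex_unlock(&intra_process_lock);
}

/* Each idle worker checks this under the intra-process lock before
 * ever competing for the cross-process accept lock; if the flag is
 * set, the thread exits without accepting another connection. */
static bool worker_should_exit(void)
{
    bool exiting;
    pthread_mutex_lock(&intra_process_lock);
    exiting = workers_may_exit;
    pthread_mutex_unlock(&intra_process_lock);
    return exiting;
}
```

The key property is that no dummy connects are needed at all: the flag is process-local, so only the targeted process's idle threads see it and exit.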

With MRPC (max_requests_per_child) logic, you probably need to
decrement MRPC *before* you actually call process_connection and 
before the intra-process listener lock is released.  But I'd have 
to think about this some more.  -- justin
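A sketch of that ordering, assuming a simple per-child request budget (the counter name and helper are hypothetical, not httpd code). Claiming the slot while the listener lock is still held means two threads can't both accept "the last" request:

```c
#include <assert.h>

static int requests_left = 2;   /* remaining MRPC budget for this child */

/* Called with the intra-process listener lock held.  Returns 1 if
 * this thread may go on to accept and process a connection. */
static int reserve_request_slot(void)
{
    if (requests_left <= 0)
        return 0;        /* budget exhausted: let the child die off */
    requests_left--;     /* decrement BEFORE releasing the listener
                          * lock and before process_connection() */
    return 1;
}
```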
