httpd-dev mailing list archives

From "Rose, Billy" <>
Subject RE: [PATCH] convert worker MPM to leader/followers design
Date Wed, 10 Apr 2002 13:44:22 GMT
How about this: the listener has a FIFO for queuing incoming requests (its
size set via the .conf file, a default, or perhaps available memory). A
"Master" thread maintains a linked list of all children and the threads
therein. The Master sleeps waiting on a signal from the Listener. The
Listener simply runs in a loop waiting on connections, grabbing each one,
stuffing it into the FIFO, and waking the Master. Once awakened by the
Listener, the Master dispatches the request at the head of the FIFO to the
next available worker, marks that thread's execution state as "busy" in the
linked list, and finally increments a node pointer to point to the new head
of the FIFO. The Master then checks for more entries in the FIFO and sleeps
if none are available. This has two major ramifications I immediately see:
1) depending on the queue size, _huge_ spikes would be handled gracefully;
2) the scoreboard becomes a linked list maintained as part of the
request/response cycle. This model should also work in other MPMs. In the
prefork model, the word "thread" above is simply replaced with "process".

Billy Rose

> -----Original Message-----
> From: Brian Pane []
> Sent: Wednesday, April 10, 2002 12:51 AM
> To:
> Subject: [PATCH] convert worker MPM to leader/followers design
> Based on the "slow Apache 2.0" thread earlier today,
> and my observation therein that it's possible for a
> worker child process to block on a full file descriptor
> queue (all threads busy) while other child procs have
> idle threads, I decided to revive the idea of switching
> the worker thread management to a leader/followers
> pattern.
> The way it works is:
>   * There's no dedicated listener thread.  The workers
>     take turns serving as the listener.
>   * Idle threads are listed in a stack.  Each thread has
>     a condition variable.  When the current listener
>     accepts a connection, it pops the next idle thread
>     from the stack and wakes it up using the condition
>     variable.  The newly awakened thread becomes the
>     new listener.
>   * If there is no idle thread available to become
>     the new listener, the next thread to finish handling
>     its current connection takes over as listener.
>     (Thus a process that's already saturated with
>     connections won't call accept() until it actually
>     has an idle thread available.)
> In order to implement the patch quickly, I've used a
> mutex to guard the stack for now, rather than using
> atomic compare-and-swap operations like I'd once
> proposed.  In order to improve scalability, though,
> this mutex is *not* used for the condition variable
> signaling.  Instead, each worker thread has a private
> mutex for use with its condition variable.  This
> thread-private mutex is locked at thread creation,
> and the only subsequent operations on it are those
> done implicitly by the cond_signal/cond_wait.  Thus
> only the thread associated with that mutex ever locks
> or unlocks it, which should help to reduce synchronization
> overhead.  (The design is dependent on the semantics
> of the one-listener-at-a-time model to synchronize
> the cond_signal with the cond_wait.)
> Can I get a few volunteers to test/review this?
> Thanks,
> --Brian
