httpd-dev mailing list archives

From Paul Querna <p...@querna.org>
Subject Re: timeout queues in event mpm
Date Mon, 14 Nov 2011 16:12:06 GMT
On Mon, Nov 14, 2011 at 7:47 AM, Greg Ames <ames.greg@gmail.com> wrote:
>
>
> On Fri, Nov 11, 2011 at 11:07 PM, Paul Querna <paul@querna.org> wrote:
>>
>> 4) Have the single Event thread de-queue operations from all the worker
>> threads.
>
>
> Since the operations include Add and Remove, are you saying we would have to
> have to wait for a context switch to the listener thread before
> apr_pollset_add() and apr_pollset_remove() or their replacements complete?
> The _add()s execute immediately on the worker threads now; I believe the
> _remove()s do too... How would you avoid adding latency to the Add
> operations?  It probably doesn't matter so much for Remove because we are
> done with the connection.

The problem was that in trunk, we had to hold the lock for the
timeout queues while we were doing the pollset operation.  The
pollset already had its own internal mutex too, for its own rings.  So
we were double-locking a fairly heavily used piece of code.  Switching
to the single-writer, single-reader queue seems to have been a win so
far.
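To sketch the idea (this is illustrative code, not the actual httpd
queue; the names and the fixed capacity are my own, and a real SMP
build would need the memory barriers noted in the comments):

```c
#include <assert.h>
#include <stddef.h>

#define QCAP 8  /* capacity; a power of two so we can mask indices */

/* Single-writer/single-reader ring: the producer only writes tail,
   the consumer only writes head, so neither side needs a mutex. */
typedef struct {
    void *slots[QCAP];
    volatile unsigned head;  /* advanced only by the consumer */
    volatile unsigned tail;  /* advanced only by the producer */
} spsc_queue_t;

/* Producer side: returns 0 on success, -1 if the queue is full. */
static int spsc_push(spsc_queue_t *q, void *item)
{
    unsigned tail = q->tail;
    if (tail - q->head == QCAP)
        return -1;                      /* full */
    q->slots[tail & (QCAP - 1)] = item;
    /* On SMP a release barrier belongs here so the consumer sees
       the slot write before it sees the new tail. */
    q->tail = tail + 1;
    return 0;
}

/* Consumer side: returns NULL when the queue is empty. */
static void *spsc_pop(spsc_queue_t *q)
{
    unsigned head = q->head;
    void *item;
    if (head == q->tail)
        return NULL;                    /* empty */
    item = q->slots[head & (QCAP - 1)];
    /* An acquire barrier here would pair with the producer's release. */
    q->head = head + 1;
    return item;
}
```

With one worker pushing and only the listener thread popping, each
index has exactly one writer, which is what removes the double lock.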

Actually, it turns out we never call remove from the worker threads
(at least not for the main event_pollset).

In regards to reducing latency, I'm thinking about using the
apr_pollset_wakeup API, which uses an internal pipe under the covers.
I was disappointed, however, that it didn't yet support EventFD or
kqueue user events... but that should be a relatively easy set of
internal improvements inside APR.
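For reference, the pipe trick itself looks roughly like this (a POSIX
sketch of the mechanism, not APR's actual implementation; the function
names here are made up):

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

int wakeup_pipe[2];  /* [0] = read end in the pollset, [1] = write end */

/* Worker side: nudge the listener out of its poll() call. */
static void wakeup_listener(void)
{
    char b = 0;
    (void)!write(wakeup_pipe[1], &b, 1);
}

/* Listener side: the read end sits in the pollset alongside the client
   sockets; when it turns readable, drain it and re-check the queues.
   Returns 1 if explicitly woken, 0 on timeout. */
static int wait_for_events(int timeout_ms)
{
    struct pollfd pfd = { 0, POLLIN, 0 };
    int n;
    pfd.fd = wakeup_pipe[0];
    n = poll(&pfd, 1, timeout_ms);
    if (n > 0 && (pfd.revents & POLLIN)) {
        char b;
        (void)!read(wakeup_pipe[0], &b, 1);  /* drain the nudge */
        return 1;
    }
    return 0;
}
```

An eventfd or kqueue EVFILT_USER event would replace the pipe with a
single kernel object and avoid the extra fd pair, which is the
improvement I'd like to see inside APR.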

>> This would remove the 4 or 5 separate timeout queues we have
>> developed, and their associated mutex, and basically move all of the
>> apr_pollset operations to the single main thread.
>
> The 4 or 5 separate queues give us a simple and cheap way to know when the
> timeouts have expired.  If we remove them, how do we maintain time order or
> otherwise do the job as cheaply as we are now?

I've kept the separate queues; you are right, they are much easier
for maintaining time order.  I've just removed all the locking around
them.
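The reason the per-timeout queues stay cheap is that every entry in a
given queue shares the same fixed duration, so append order is also
expiry order and only the head ever needs checking.  A rough sketch
(illustrative names, not httpd's actual structures):

```c
#include <assert.h>
#include <stddef.h>

typedef struct conn_entry {
    struct conn_entry *next;
    long expires;            /* absolute expiry time */
} conn_entry;

typedef struct {
    conn_entry *head, *tail;
    long timeout;            /* fixed duration shared by every entry */
} timeout_queue;

/* Append keeps the queue sorted by expiry for free, because every
   entry gets the same fixed timeout added to a nondecreasing "now". */
static void tq_append(timeout_queue *q, conn_entry *e, long now)
{
    e->expires = now + q->timeout;
    e->next = NULL;
    if (q->tail) q->tail->next = e; else q->head = e;
    q->tail = e;
}

/* Pop one expired entry from the head, or NULL.  Later entries cannot
   have expired earlier, so no scan of the queue is ever needed. */
static conn_entry *tq_pop_expired(timeout_queue *q, long now)
{
    conn_entry *e = q->head;
    if (!e || e->expires > now)
        return NULL;
    q->head = e->next;
    if (!q->head) q->tail = NULL;
    return e;
}
```

One such queue per distinct timeout value (keepalive, write completion,
and so on) is what makes the expiry sweep O(expired) instead of
O(connections).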

Thanks,

Paul
