httpd-dev mailing list archives

From Paul Querna <>
Subject Re: Events, Destruction and Locking
Date Tue, 07 Jul 2009 06:03:45 GMT
On Mon, Jul 6, 2009 at 10:50 PM, Justin Erenkrantz<> wrote:
> On Mon, Jul 6, 2009 at 10:20 PM, Paul Querna<> wrote:
>> I am looking for an alternative that doesn't expose all this craziness
>> of when to free, destruct, or lock things.  The best idea I can come
>> up with is that each connection would become 'semi-sticky' to a
>> single thread.  Meaning each worker thread would have its own queue of
>> upcoming events to process, and all events for connection X would sit
>> on the same 'queue'.  This would prevent two threads waiting for
>> destruction, and other cases of a single connection's mutex locking up
>> all your workers, essentially providing basic fault isolation.
>> These queues could be mutable, and you could 'move' a connection
>> between queues, but you would always take all of its events and
>> triggers, and move them together to a different queue.
>> Does the 'connection event queue' idea make sense?
> I think I see where you're going with this...being so dependent upon
> mutexes is like going into a jungle full of guerrillas armed with
> only a dull kitchen knife.
> So, a connection gets assigned to a 'thread' - but it has only two
> states: running or waiting for a network event.  The critical part is
> that the thread *never* blocks on network traffic...all the 'network
> event' thread does is detect "yup, ready to go" and throws it back to
> that 'assigned' thread to process the event.  Seems trivial enough to
> do with a serf-centric system.  =)

Yes, I think the connection having the "two states: running or waiting
for a network event" is the key to making this work.  The
thread-stickiness is really just the conceptual model, but basically
if a connection is already 'running', all other events that would have
fired for it, like a timeout, would just queue up behind the running
operation, rather than running directly on another thread.  This starts
solving a multitude of locking and cleanup issues (I think).
