httpd-dev mailing list archives

From Paul Querna <>
Subject Re: Events, Destruction and Locking
Date Tue, 07 Jul 2009 14:05:21 GMT
On Tue, Jul 7, 2009 at 10:01 AM, Graham Leggett<> wrote:
> Paul Querna wrote:
>> Yes, but in a separate process it has fault isolation.. and we can
>> restart it when it fails, neither of which are true for modules using
>> the in-process API directly -- look at the reliability of QMail, or
>> the newer architecture of Google's Chrome, they are both great
>> examples of fault isolation.
> As is httpd prefork :)
> I think the key target for the event model is for low-complexity
> scenarios like shipping raw files, or being a cache, or a reverse proxy.
> If we have three separate levels, a process, containing threads,
> containing an event loop, we could allow the behaviour of prefork (many
> processes, one thread, one-request-per-event-loop-at-a-time), or the
> behaviour of worker (one or many processes, many threads,
> one-request-per-event-loop-at-a-time), or an event model (one or many
> processes, one or many threads,
> many-requests-per-event-loop-at-one-time) at the same time.
> I am not sure that splitting request handling across threads (in your
> example, connection close handled by event on thread A, timeout handled
> by event on thread B) buys us anything (apart from the complexity you
> described).
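Graham's three levels map roughly onto the knobs the existing MPMs already expose. An illustrative httpd.conf fragment, using the modern 2.4 directive names; the values are examples only, not recommendations:

```apache
# prefork: many processes, one thread each, one request at a time
<IfModule mpm_prefork_module>
    StartServers        5
    MaxRequestWorkers 150   # = number of single-threaded processes
</IfModule>

# worker: one or many processes, many threads, one request per thread
<IfModule mpm_worker_module>
    ServerLimit        4
    ThreadsPerChild   25
    MaxRequestWorkers 100
</IfModule>

# event: like worker, but a listener multiplexes keep-alive and idle
# connections so a thread is not pinned to every connection
<IfModule mpm_event_module>
    ThreadsPerChild   25
    AsyncRequestWorkerFactor 2
</IfModule>
```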

It breaks the 1:1 connection-to-thread (or process) mapping, which is
critical to a low memory footprint with thousands of connections.
Maybe I'm just insane, but all of the servers taking market share,
like lighttpd, nginx, etc., use this model.
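The model being described can be sketched in a few lines: one thread, one event loop, many registered connections, with no thread or process dedicated to any single connection. This is a minimal stdlib illustration, not httpd code; the socketpairs stand in for real client connections.

```python
# One thread, one event loop, many connections -- no 1:1
# connection-to-thread mapping. Uses only the stdlib `selectors`
# module; everything here is illustrative.
import selectors
import socket

sel = selectors.DefaultSelector()

# Simulate "thousands" of connections with a handful of socketpairs;
# each pair stands in for one client connection.
pairs = [socket.socketpair() for _ in range(5)]
for client, server_side in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ)

# Clients write whenever they like; the single loop services whichever
# connection is ready, so an idle connection costs only a registration.
for i, (client, _server_side) in enumerate(pairs):
    client.sendall(b"request %d" % i)

served = 0
while served < len(pairs):
    for key, _mask in sel.select(timeout=1):
        key.fileobj.recv(4096)  # readiness was reported, so no block
        served += 1
print("served %d connections on one thread" % served)
```

The memory argument follows directly: a registered-but-idle connection holds a file descriptor and a small bookkeeping entry, not a full thread stack or process.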

It also prevents all variations of the Slowloris stupidity, because
it's damn hard to overwhelm the actual connection processing if it's
all async and doesn't block a worker.
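The Slowloris point can be made concrete with the same event-loop style: a client that dribbles its request slowly holds only a selector registration and a deadline, never a worker thread, so it can be reaped cheaply. A hedged sketch with illustrative names and timeout values:

```python
# Why an async loop resists Slowloris-style trickle attacks: a slow
# client costs a registration plus a deadline, not a worker thread.
# HEADER_TIMEOUT and all names here are illustrative, not httpd's.
import selectors
import socket
import time

HEADER_TIMEOUT = 0.2  # seconds a client gets to finish its request

sel = selectors.DefaultSelector()
deadlines = {}  # server-side socket -> absolute deadline

fast_client, fast_srv = socket.socketpair()
slow_client, slow_srv = socket.socketpair()
for srv in (fast_srv, slow_srv):
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ)
    deadlines[srv] = time.monotonic() + HEADER_TIMEOUT

fast_client.sendall(b"GET / HTTP/1.1\r\n\r\n")  # completes at once
# slow_client sends nothing: a Slowloris attacker dribbling bytes

completed, dropped = [], []
while deadlines:
    timeout = max(0.0, min(deadlines.values()) - time.monotonic())
    for key, _mask in sel.select(timeout=timeout):
        key.fileobj.recv(4096)
        completed.append(key.fileobj)
        sel.unregister(key.fileobj)
        del deadlines[key.fileobj]
    # Reap connections past their deadline -- no thread was ever held.
    for srv in [s for s, d in deadlines.items()
                if time.monotonic() >= d]:
        dropped.append(srv)
        sel.unregister(srv)
        del deadlines[srv]

print("completed=%d dropped=%d" % (len(completed), len(dropped)))
```

Contrast this with a 1:1 model, where each trickling connection pins an entire worker until it times out, which is exactly what Slowloris exploits.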
