httpd-dev mailing list archives

From Ben Hyde <>
Subject Re: 2.0: process model design
Date Sun, 23 Nov 1997 17:18:29 GMT
Some more stuff to toss in the pot....

I'd be happy to see the job of monitoring the
set of processes and threads broken out into a
small isolated program.  That presumably must
be as robust as possible.  It also would, one
hopes, be the only component that must communicate
with whatever plays the role of service manager
on this platform.  For example I think the
service manager coupling on Win32 could be a
second executable.

On a more complex level...

Most apps I've worked on over the years use a
process model in the cooperative multiprocessing
family.  I've never seen a highly portable application
that used threads.  Their implementation varies
too widely in the cost/thread, the qualities of
the scheduling, and the semantics of thread local

At first I was a little surprised that Apache uses multiple
processes.  Then I saw it as a nice trick for managing the
leak/robustness problem.  It's cool, I thought, since the
amount of state in an HTTP server
seems intuitively to be so slight that starting up
a fresh one ought to be cheap.  After a few of those
"we have 1200 virtual hosts" bug reports I'm not so sure.

The leak/robustness value of the current "have lots
of children and kill 'em often" ecology is sufficient to make
precompiling the configuration data very tempting.

Systems I've built usually have a very high bandwidth
of very small transactions.  In that case it's
usually best to bite the bullet and write everything
with a cooperative multiprocessing model and ONE stack.
Each "microthread" in such a system then must "pack
its bags" prior to relinquishing the processor to
the top-level scheduling loop.  This was the design
I expected to see when I started into the Apache code.

Of course a model like this is a burden to implement
inside of.  Each and every one of the "sips of processing"
can screw up the entire thing in one of three ways: blocking
on I/O, starving out its neighbors, or leaving a shared
data structure in mid-transaction.  One nice aspect of this
approach is that you don't need mutexes at all.

In this model what would have been the stack of each
thread is replaced with some more classic data structure.
Designing this is a bit of work, usually, but interestingly
enough it's already there in Apache i.e. the
request and server data structures.

Interrupt processing (aka signals) is usually simpler
in this model, since for most signals one just schedules
a microthread and returns to the microthread in progress,
presuming it will relinquish the processor pretty quickly.
Some systems like to keep a global where a request
to relinquish can be noted.

On a more mundane level...

It is very nice to have a "what are you doing?" signal.

I always end up adding a priority scheme of some sort,
and I find it is best if the priority is on the transaction
and not on the thread.  I don't know how many systems
I've had to rework to put that in.

There is inevitably a single line in the code that does all
the blocking; people always forget to provide an API to let
plug-in modules subscribe to its services.

  - ben h.
