httpd-dev mailing list archives

From Rasmus Lerdorf <ras...@lerdorf.on.ca>
Subject Re: Apache 2 multiple pools feature request
Date Wed, 03 Mar 1999 15:15:10 GMT
> Having "multiple pools" of different HTTPD children lying around just
> doesn't sound like good engineering to me.  First off, you have httpd
> processes running as different UIDs, and the implications for shared
> memory between these pools seem nontrivial.  Next you need to come up with
> an efficient way for processes to pass accepted connections from one to
> another, since they're all listening on the same port.  Finally how this
> plugs into a multithreaded 2.0 architecture puzzles me further.  

It would be cool if they could do some descriptor passing and listen to
the same port, but even without that it would still be useful.  Require
each pool to have either a separate port or a separate IP and treat them
as completely independent Apache instances.
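The descriptor passing being wished for here does exist on Unix (SCM_RIGHTS ancillary data over a Unix-domain socket). A minimal sketch, using Python 3.9+'s socket.send_fds/recv_fds purely as a stand-in, with an os.pipe descriptor in place of a real accepted connection:

```python
# Sketch: a parent "acceptor" hands a descriptor to a child from
# another pool over a Unix socketpair (SCM_RIGHTS under the hood).
# Assumes Python 3.9+; all names here are illustrative.
import os
import socket

def pass_descriptor():
    parent_sock, child_sock = socket.socketpair(socket.AF_UNIX)
    # Stand-in for an accepted client connection: any open fd works.
    r, w = os.pipe()

    pid = os.fork()
    if pid == 0:  # child: the "other pool" receiving the connection
        parent_sock.close()
        msg, fds, flags, addr = socket.recv_fds(child_sock, 1024, 1)
        # The child now owns a duplicate of the descriptor and serves it.
        os.write(fds[0], b"served by child\n")
        os._exit(0)

    child_sock.close()
    socket.send_fds(parent_sock, [b"conn"], [w])  # hand off the write end
    os.close(w)
    os.waitpid(pid, 0)
    return os.read(r, 64)

print(pass_descriptor())  # the child wrote through the passed descriptor
```

The same mechanism would let pools share one listening socket: whichever process accepts a connection ships the fd to the pool that should serve it.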

> In fact we're really going to want something equivalent to a wire protocol
> approximating the Apache API anyways, for robustness, when we go
> multithreaded.  Apache is legendary for its stability, which owes no small
> debt to the multiprocess model, the fact that bad code only kills its own
> single connection.  When we go multithreaded, we run the risk of someone's
> bad module code (or worse, someone's bad perl code when used with
> mod_perl, or bad PHP plug-in code...) bringing down the whole server,
> clearly an untenable solution for ISPs or most anyone else.  Having N
> processes with M threads only reduces the damage to 1/N of the
> connections, and since we're going multithreaded to try and keep N as
> close as we can to 1....

I think as long as N>1, even if far smaller than in the pure
multi-process model, we still retain the stability while gaining some
of the benefits of the threaded model, assuming crashes are only
occasional.
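To make the 1/N arithmetic concrete (the pool sizes here are invented, not from the thread):

```python
# With N processes of M threads each, one crashing request takes down
# its whole process -- M connections, i.e. 1/N of the total -- rather
# than the entire server, as in a single-process threaded design.
N, M = 8, 32                 # illustrative pool sizes
total = N * M                # 256 concurrent connections
lost = M                     # one process dies with all its threads
print(lost / total, 1 / N)   # both are 0.125
```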

> A more robust model is one where we have a stripped-down, basic
> multithreaded engine that handles a core set of requests (everything we
> can "guarantee" is thread-safe, i.e. what's in the modules/standard/
> base), and then pass off anything more complex to backend daemons.
> Nicely, this "pass off anything more complex" is precisely the same
> problem we will be facing in deciding how to facilitate speed freaks who
> want to put the basic HTTP engine into kernel-loadable modules, 
> calling out from the kernel to userland for complex responses.
> 
> Think of it as inetd on steroids.  :)  I think this is really the better
> way to go, long term.  And I don't see why the dynamic content engine
> teams (perl, java, php) couldn't work together on a good relay protocol
> that got them most of what they needed from the Apache API, and for what
> they didn't get, provided a reasonable alternative.
> 
> It's useful to start doing this now, so we can understand better what we
> need the Apache2 binary API to provide, as distinguished from what can be
> accomplished in the relay protocol.  And there's no reason we couldn't
> start playing with the relay protocol before 2.0's out, too.

How would this relay protocol work on WIN32?  At least on Win95/98,
having two processes trying to speak to each other is nowhere near as
efficient as loading a DLL into the main process.

I also don't quite understand where the line is drawn between needing a
backend server and being a true module.  For example, would mod_status
now have to be rewritten as a standalone server?  What about mod_proxy?
mod_dav?  mod_cgi?  The last two would definitely benefit from the
different-user ability that this would bring.

The other issue is that this is radically different from how all the
other servers work.  We are currently working on an abstraction layer
over ISAPI, NSAPI, WSAPI, and the Apache API which will move all
server-API-related code into a thin layer outside of PHP, making PHP a
generic web server module that can be used with any server.  Adding
code to have PHP run as a standalone daemon probably wouldn't be too
hard, but architecturally it would be a bit of a stretch.

If this IPC protocol thing does become a reality, I would probably just
provide it as an option for large ISPs who need the security benefits
it brings.  For large dedicated single-user sites, I doubt the IPC
approach can come anywhere close to the speed of the true module
architecture, so I think that model is likely to remain the primary
approach.
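For concreteness, the kind of IPC relay being weighed here against the in-process module might look roughly like this sketch. A thread stands in for the separate backend daemon, and the one-line request/response protocol is invented for illustration; a real design would carry much more of the Apache API across the boundary:

```python
# Sketch of the "relay to a backend daemon" idea: the threaded core
# forwards a request over a local socket and reads back the response.
# Every extra copy and context switch here is the overhead an
# in-process module avoids.
import socket
import threading

def backend(conn):
    # Backend daemon: read one request line, produce a response.
    req = conn.makefile("rb").readline().decode().strip()
    conn.sendall(f"HANDLED {req}\n".encode())
    conn.close()

def relay(request_line):
    # Frontend core: hand the request across the process boundary.
    front, back = socket.socketpair()
    t = threading.Thread(target=backend, args=(back,))
    t.start()
    front.sendall(request_line.encode() + b"\n")
    reply = front.makefile("rb").readline().decode().strip()
    t.join()
    front.close()
    return reply

print(relay("GET /index.php"))  # HANDLED GET /index.php
```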

-Rasmus

