httpd-dev mailing list archives

From Rob McCool <>
Subject Re: non-forking again
Date Tue, 14 Mar 1995 21:58:48 GMT
 * "Re: non-forking again" by various people
 *    written Tue, 14 Mar 95 09:15:11 EST
 * Rob, are you out there?

I'm lurking about around here somewhere..

 * *) You've got to be quite a bit more careful with malloc() and
 *    friends --- memory leaks don't matter in a forking server, since
 *    the process is going to die pretty quickly anyway, but in a
 *    non-forking server they add up fast.  (Note that there are some
 *    portions of the existing code base which use strdup() uncomfortably
 *    freely).

If I remember the NCSA code right, it leaks memory and file
descriptors like a sieve. strdup() is used in a couple of places,
though most of the code opts for the stack-based approach of memory
allocation. At the time, doing all of that made sense...

 * *) You have to be sure to clear out per-transaction state before
 *    starting a new transaction.  For instance, you don't want the
 *    current transaction to be affected by an AddType directive which
 *    was read out of a .htaccess file the last time around.  This
 *    involves being very careful with global variables; the current code
 *    is unfortunately rather careless.

That too... there is no per-request "context" in the NCSA code.

 * A final note --- simple implementations of the NetSite-type "mob
 * process" mechanism (a bunch of processes all doing accepts on the
 * same socket) are vulnerable to the effect I discovered with my
 * MaxProcs experiment, namely that long requests (such as CGI hits
 * and server-side includes files with <!--#exec --> directives) tend
 * to starve out short ones, with deleterious effects on both latency
 * and throughput.  (I think Rob McCool has also seen this effect when
 * he was evaluating alternate NetSite designs --- Rob, are you out
 * there?)

Yes. For normal files this is mostly not a problem, because large
enough socket buffers let you turn processes over to the next request
faster than clients disconnect (along with having SO_LINGER turned off).

For CGI, a server could require more processes if those CGI programs
did a lot of work (especially if they were long-running CGI programs
implemented through shell scripts, which can't detect a client
disconnect).

At some point you want to assert that N is as high as the number of
processes can get; allowing uncontrolled forking is a bad idea. The
question then becomes when you spawn those N children: do you spawn
them one at a time as they're needed and keep them around, or do you
take the Netsite 1.0 approach and spawn them all to begin with, in
order to avoid the shared memory requirements associated with
coordinating growth and reduction of a process set?

 * Because of this effect, people at sites with a substantial number
 * of CGI hits may get less than the benefit they expect from a
 * non-forking server, over a properly tuned forking server; in fact,
 * it's possible that things might actually get worse.  (Note that
 * this assumes both a properly tuned server --- neither the NCSA nor
 * CERN servers count for this --- *and* a substantial number of CGI
 * hits per minute).

They shouldn't get worse, though we've found that under certain
operating systems and under extreme load, using a persistent process
model causes the fork() for CGI to take an increasing amount of
time. If you don't allocate enough processes, then performance will
suffer.

 * You could get around this by having a large number of processes in
 * the mob, at a nontrivial cost in swap space, or by arranging for
 * "long" transactions to be shelved if too many potentially short
 * ones are queued up behind, which is doable, but very very hairy.

There are other solutions, too, which we're looking into for the
Netsite 1.1 release. Unfortunately I haven't devised one that works
without shared memory.

> The problem was only in MP systems (well, at least Solaris 2.x and
> SGI).  If multiple processes did an accept, it was not defined what
> would happen.

The BSD spec doesn't define it, but all BSD implementations take care
of this well. This includes SunOS, HP-UX, AIX, IRIX (even MP), OSF/1,
and BSDI. We've been having problems with this approach under Solaris,
both SP and MP systems.

I'd talk your ear off about the different approaches you can use to
this problem, or what tradeoffs you make when you decide to move away
from a simple architecture, etc etc but I'm not sure how much you guys
care about it.

