httpd-dev mailing list archives

From r..@ai.mit.edu (Robert S. Thau)
Subject Threaded server code...
Date Tue, 02 Apr 1996 20:25:51 GMT
Well, to go with the NT port, I might as well show people what I've
been up to with threading.  NB this is new, shaky code; don't take it
for more than it's worth.

Briefly, I've put up source for a user-mode threaded server in

  ftp://ftp.ai.mit.edu/pub/users/rst/test.tar.gz 

This package comprises: 1) A user-mode threads package I wrote,
tentatively named Arachne (though I may decide to go egotistical and
call it Really Simple Threads); 2) a version of Apache which has been
modified so that the child processes each handle multiple connections in
separate threads.  (There are still multiple child processes).
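
To give a feel for the shape of the thing, here's a stripped-down
sketch of what a per-child loop along these lines looks like; the call
names (ar_accept, ar_thread_create) are invented for the sketch, not
the actual entry points in the package, and all of the error handling
and scoreboard bookkeeping is left out.

  /* Sketch only: ar_accept() and ar_thread_create() are invented
     stand-ins for whatever the threads package really provides. */

  #include <sys/types.h>
  #include <sys/socket.h>

  extern int ar_accept(int sd, struct sockaddr *addr, int *len);
  extern int ar_thread_create(void (*fn)(void *), void *arg);

  static void handle_connection(void *arg)
  {
      int csd = (int) (long) arg;
      /* ... run the usual request loop on csd, then close it ... */
  }

  static void child_main(int listen_sd)
  {
      for (;;) {
          /* blocks only this thread, not the whole process */
          int csd = ar_accept(listen_sd, (struct sockaddr *) 0, (int *) 0);
          if (csd < 0)
              continue;
          ar_thread_create(handle_connection, (void *) (long) csd);
      }
  }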

(Yes, native thread support is obviously the way to go on platforms
that support it, but not every platform does, and we probably want an
alternative to one-process-per-active-connection on the platforms that
don't.  Arachne lets us do that without major changes to the API (as
would be required for the state-machine approach).  As to portability,
I've managed to get the threaded server to run and at least pass smoke
tests on quite a few systems, including hard cases such as AIX and
HP-UX; Arachne manages at least basic context-switching on several
more machines where I haven't gotten around to testing the server yet.
This includes several systems without pthreads ports --- in fact, my
whole package is about the size of pthreads' machine-dependent support
files for one processor).

Now for the bad news.  The Apache code is from a development snapshot
which is, at this point, more than a week old.  (Apologies about not
keeping up, but when I'm doing major surgery, I don't want the patient
squirming under the knife).  Additionally, it needs work in at *least*
the following areas:

1) There is a limit to how many threads an individual process ought
   to support --- at the very least, we don't want them running out
   of file descriptors.  Right now, nothing checks.

2) I've arranged to use thread-blocking (as opposed to process-blocking)
   I/O when talking to the client, or to CGI scripts --- though Ben
   Laurie informs me that to make that work on SCO, the spawn_child
   code in alloc.c will have to be hacked to use socketpair() and not
   pipe().  It still needs to be fixed so that the wait() in pool
   cleanup only blocks the thread that's doing the cleanup.

   [ The I/O changes were actually pretty simple --- I just modified
     the buff.c code to call the threads package's thread-blocking I/O
     primitives rather than invoking read() and write() directly
     (there's a sketch of the substitution after this list); I then
     figured out that I had to do something about scripts, and the
     easiest thing was to make them use the buff.c code instead
     of stdio.  NB there are no locks on this, which isn't a problem
     now, but would be if multiple threads ever wanted to do I/O on
     the same BUFF --- something most likely to come up with, e.g.,
     log files.  There may also be something like this in the proxy
     code, which I haven't tried to run threaded yet.  ]

3) DNS is a problem.  Specifically, the code needs to do something to
   avoid blocking a request-handling process when doing the moral
   equivalent of gethostbyname().  (Note that a shared-memory cache
   which is maintained by the server processes in collusion does not,
   by itself, solve the problem --- it would reduce the frequency with
   which gethostbyname() was called, but it would still get called
   often enough, blocking for up to five seconds each time).

   The simplest thing is probably to punt gethostbyname() to an
   auxiliary process, and use thread-blocking I/O to talk to that
   process (sketched after this list).  If we can manage to toss in a
   cache which doesn't block
   threads on hits, that's great too (modulo the recent discussion on 
   caches and DNS time-to-live values in http-wg, which is quite
   relevant to proxies).

4) Even with DNS lookups and CGI scripts disabled, it still tests
   slower than the non-threaded version of the server (specifically,
   the development snapshot I'm using as base code, compiled with the
   same options and run off the same config files).  I'm not sure why
   this is.  Giving the thread which does accept() a priority boost
   turns out to make a big difference, but it does not eliminate the
   effect entirely; it's possible that there's some other scheduling
   effect I've overlooked.

   (NB the tests were done with a pool of five threaded processes.
   With that size pool, the kernel manages to find something to run
   essentially all the time --- the CPU idle time, as displayed by
   'top', is negligible.  This is why I suspect scheduling effects,
   but anything's possible).

5) Kept-alive connections are presently tossed after a very short
   timeout.  There are good reasons for doing this --- with an entire
   process tied up hanging on the line for each connection, the cost
   of keeping one open for more than a fairly short time is
   prohibitive.  However, log-file trace studies have shown that there
   would be a payoff to keeping them open far longer, if you could.
   So, in a threaded server, we might want to revisit this tradeoff.
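
Since a couple of the items above lean on the thread-blocking I/O
substitution, here's roughly what the change in buff.c from (2) amounts
to.  The names here (thread_blocking_read, ar_wait_readable) are
invented for illustration, not the real code, and the sketch assumes
the descriptor has already been put in non-blocking mode.

  /* Sketch of the substitution from (2): instead of letting read()
     block the whole process, retry on EWOULDBLOCK/EAGAIN after putting
     only this thread to sleep.  ar_wait_readable() is an invented name
     for the package's "sleep until fd is readable" primitive; the fd
     is assumed to be in non-blocking mode. */

  #include <unistd.h>
  #include <errno.h>

  extern void ar_wait_readable(int fd);

  static int thread_blocking_read(int fd, char *buf, int nbyte)
  {
      for (;;) {
          int n = read(fd, buf, nbyte);
          if (n >= 0)
              return n;
          if (errno != EWOULDBLOCK && errno != EAGAIN)
              return -1;
          ar_wait_readable(fd);   /* only the calling thread waits */
      }
  }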
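
And here's the rough shape of the auxiliary-resolver idea from (3): a
helper process sits in the blocking lookup so the server threads don't
have to, and the server side talks to it over a pipe using the
thread-blocking I/O above.  None of this is in the tarball; the
protocol (one dotted-quad in, one name out) and the function are
inventions to make the idea concrete.

  /* Sketch of the helper suggested in (3): read one dotted-quad
     address per line, do the blocking reverse lookup here, and write
     the host name back (or echo the address if the lookup fails).
     The server thread that asked reads the answer with thread-blocking
     I/O, so nothing else stalls while this process sits in
     gethostbyaddr().  Invented for illustration; not in the tarball. */

  #include <stdio.h>
  #include <string.h>
  #include <netdb.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  static void resolver_main(FILE *in, FILE *out)
  {
      char line[256];

      while (fgets(line, sizeof(line), in) != NULL) {
          struct in_addr addr;
          struct hostent *h;

          line[strcspn(line, "\n")] = '\0';
          addr.s_addr = inet_addr(line);
          h = gethostbyaddr((char *) &addr, sizeof(addr), AF_INET);

          fprintf(out, "%s\n", h ? h->h_name : line);
          fflush(out);
      }
  }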

In short, this code is nowhere near ready for prime time.  Personally,
I'm not considering it for the next release, but as exploratory work
for the one after that.  Still, if anyone has thoughts about the above
issues, I'd like to hear them; also, if anyone is putting serious work
into something which *can't* run threaded, they might want to start
looking at the relative costs and benefits of the alternatives.

(As a sidebar of sorts, I'd like to note that all of the issues I
raised above would come up with any server in which a single process
could serve multiple requests at the same time, whether that effect
were achieved via threads or some other means).

Remember, this is draft, shaky, pre-alpha code; still, I thought some
people might find it of interest...

rst
