httpd-dev mailing list archives

From <>
Subject Re: select vs. threads vs. fork
Date Tue, 20 Apr 1999 04:56:02 GMT
On Mon, 19 Apr 1999, Nitin Sonawane wrote:

> wrote:
> > 
> > Well, when it's select based you can do your own "task management" meaning
> > no unnecessary context switches.  Not that there's much task management to
> > do.  Two selects and two for loops to feed all the outputs and read all
> > the inputs.  Even in a pre-forked or threaded server, system calls will
> > block you.  However, that shouldn't impact much on the throughput, because
> > all your I/O is buffered.  It's not as if when you're busy looking for a
file your TCP output queues are empty and waiting for input.  CGIs are
forked, so that shouldn't be a problem.
> You're right that network/socket I/O is implicitly non-blocking (thank
> TCP for its window buffers). The issue I'm unconvinced of is file system
> calls. Those can block indeterminately inside the kernel (unless your
> htdocs tree sits in your buffer cache). Inodes can be written
> asynchronously (e.g., last-access time stamps) but cannot be read
> asynchronously. Consequently all file system calls could get serialized
> inside the kernel. It's these calls that would end up throttling
> performance.

My argument is, they don't matter.  For a normal static content server,
most data is cached since it's used over and over.  Checking vmstat, it's
normal to see at most 40 blocks being read in per second.  That's not much
data moving, and not much opportunity for I/O blocking.  What you seem to
be missing here is that it *doesn't* throttle performance.  I think
everyone's aware that thttpd performs much better as a static content
server.  I think anyone arguing differently hasn't actually used it.  What
I want to know is how difficult it would be to use a similar model in
Apache.

> As a ball-park estimate, consider a 10ms delay for every file open (very
> conservatively spread across inode reads and/or directory reads); you
> couldn't possibly serve more than 100 static files per second.

Like I said before, you're not likely to be in a situation where you never
serve the same file twice.  I can easily serve more than 100 static files
per second with Apache.  I can easily serve more with a different server.
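[Ed.: the quoted ball-park figure follows directly from the assumed latency; a quick sanity check of that arithmetic, where the 10ms figure is the poster's assumption, not a measurement:]

```python
# Back-of-envelope from the quoted estimate: if every open() costs ~10 ms
# of blocking disk I/O, a single-threaded server can't exceed 1/0.010
# file opens per second.  The 10 ms latency is an assumed input.
open_latency_s = 0.010                  # assumed per-open blocking delay
max_files_per_sec = 1 / open_latency_s
print(max_files_per_sec)                # 100.0
```

This is exactly the caching argument above: once the inode and data are in the buffer cache, the 10ms disk delay disappears and the ceiling goes away.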

> The second issue I don't understand is 'unnecessary context switches'.
> Isn't it the case that the kernel is incessantly servicing peripheral
> interrupts? If so, wouldn't context switches be almost unavoidable?

Obviously context switches are unavoidable.  However, requiring the system
to switch between all 600 processes/threads has more overhead than
switching between one webserver process and various kernel processes.  On
most webservers (and certainly any that are relevant to this discussion),
there won't be word processors and rendering engines running in the
background.

> Speaking of mit-pthreads, barring some overhead, wouldn't such a server
> internally behave in the same manner as a select/poll-based system? If
> so, then we should be able to compile apache-apr with mit-pthreads and
> see how that performs in terms of sheer throughput.

I've never used mit-pthreads, so I can't comment.

> I don't mean to get into a flame war, but rather a brainstorming of 'why
> would event-driven servers ever perform better than a
> multiprocess/threaded server'.

This isn't exactly an event-driven server model.  What the server's doing
is checking its queue over and over, waiting for something to be done.  As
soon as there is something, it goes and does it.  That's quite different
from the multiprocess model, where the kernel is busy flipping through the
different processes, using its normal scheduling system, which is quite
suited to handling generic processes but obviously can't be as well
optimized as a purpose-written model.
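[Ed.: the loop described above — block in select(), then loop over whatever is ready — can be sketched as follows; the socketpair stands in for real client connections, and everything here is an illustration, not thttpd's actual code:]

```python
# Minimal select()-based event loop in the spirit of the model described
# above: one process, one select() call, then a for loop over every
# descriptor that's ready.  socketpair() stands in for real client sockets.
import select
import socket

server_side, client_side = socket.socketpair()
client_side.sendall(b"GET /index.html")   # pretend a client sent a request

watched = {server_side}
served = []
while watched:
    # Block until something is ready (or 1 s passes), then handle it all.
    ready, _, _ = select.select(list(watched), [], [], 1.0)
    if not ready:
        break                             # nothing pending; a real server loops
    for sock in ready:                    # read everything that's ready
        served.append(sock.recv(4096))
        watched.discard(sock)             # one request per socket in this toy

server_side.close()
client_side.close()
print(served)                             # [b'GET /index.html']
```

The key property is that nothing here ever yields to the kernel's scheduler except the single select() call, which is what's meant above by doing your own task management.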

tani hosokawa
river styx internet
