httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject Re: Apache 2.0 ideas
Date Tue, 03 Nov 1998 18:08:57 GMT


On Tue, 3 Nov 1998, Jim Gettys wrote:

> No, from userland, the fastest server will be one which caches (small)
> objects in memory, and then does a single send() of the cached memory.
> 
> File opens are expensive.  Save sendfile() for big objects, where the
> open overhead isn't significant.

We can argue about it, but the best thing would be to measure ;) 

open()s aren't as expensive under Linux as they are elsewhere... and
sendfile() isn't "thread safe" in the sense of letting you use a single fd
from multiple threads (so caching open fds isn't worth it).  Linus keeps
claiming that open() is the way to go; it'd be worthwhile to prove or
disprove his claim. 
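
Roughly the per-request path being argued for, as a sketch (untested,
error handling mostly elided, and send_static() is just a made-up name):

    #include <sys/sendfile.h>   /* Linux-specific */
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* serve one static file on an already-connected socket, opening the
     * fd fresh for each request instead of caching open fds */
    static int send_static(int sock, const char *path)
    {
        struct stat st;
        off_t off = 0;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }
        /* response headers would go out here, e.g. with writev() */
        while (off < st.st_size) {
            ssize_t n = sendfile(sock, fd, &off, st.st_size - off);
            if (n <= 0)
                break;          /* real code would handle EINTR/EAGAIN */
        }
        close(fd);
        return (off == st.st_size) ? 0 : -1;
    }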

Caching things in memory requires synchronization between threads... using
open() lets the kernel do its best job of synchronization... which is
really where I prefer to let that happen.  If userland could do fancy
spinlock tricks I wouldn't worry about it so much, but those are
extremely non-portable.  I'd rather give the kernel as many opportunities
as possible to parallelize on SMP systems.  ('cause then it's the kernel
folks' problem to make things go fast ;)

> Fundamentally, for a pipelined server with good buffering, you can end 
> up with much less than one system call/operation.  This is what makes 

Yeah, I showed this with Apache 1.3 with a few small tweaks -- the main one
required is to get rid of the calls to time() and use a word of shared
memory for the time.  (This is functionality the kernel/libc folks should
provide, either through shared memory or through the now-ubiquitous time
stamp counters on modern processors.)  I showed 75 responses in 21
syscalls.
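
The shared-memory time trick is basically this (a hypothetical sketch, not
the exact 1.3 patch; MAP_FAILED checking elided):

    #include <sys/mman.h>
    #include <time.h>
    #include <unistd.h>

    /* one word of shared memory holding the current time; mapped before
     * fork() so every child sees the same page */
    static volatile time_t *shared_now;

    static void setup_shared_clock(void)
    {
        shared_now = mmap(NULL, sizeof(*shared_now),
                          PROT_READ | PROT_WRITE,
                          MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        *shared_now = time(NULL);
    }

    /* one process (or thread) runs this; request handlers just read
     * *shared_now instead of making a time() syscall per hit */
    static void clock_updater(void)
    {
        for (;;) {
            *shared_now = time(NULL);
            sleep(1);
        }
    }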

> load (when cycles are scarcest); this is the ideal situation.  A web
> server probably can't be as simpleminded, but you get the idea anyway.

In theory it can -- if you're doing userland threads and they're
multiplexed with select() then you get much of the benefit of how X works. 
That's why I find the userland and userland/kernel hybrid approaches to
threading so much more interesting than pure kernel threads. 
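
The loop I have in mind is the classic select() multiplexer, roughly
(sketch only, the interesting bits elided):

    #include <sys/select.h>
    #include <sys/time.h>

    /* the X-style loop: one process, many connections; a single select()
     * wakeup can service every fd that's already ready, so the syscall
     * count per response can drop well below one */
    static void event_loop(int listen_fd)
    {
        fd_set rd;

        for (;;) {
            FD_ZERO(&rd);
            FD_SET(listen_fd, &rd);
            /* ... FD_SET() every connection we're waiting on ... */

            if (select(FD_SETSIZE, &rd, NULL, NULL, NULL) <= 0)
                continue;

            if (FD_ISSET(listen_fd, &rd)) {
                /* accept() new connections and register them */
            }
            /* ... resume the userland thread for each ready fd ... */
        }
    }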

(Note:  I know we could write a webserver without threads, much like
squid, but it couldn't be apache then -- it's too hard to do general
module support without threads or processes.) 

> The problem this model faces for a Web server is how the server gets
> informed that its underlying database is different, so that it can't
> trust its in memory copy.  I leave this as an exercise to the readers :-).

The web server has one other thing going for it in kernel land -- intense
usage of cached disk data.  X doesn't have that.  For example, a cached
1MB file occupies 256 4K pages.  If you've got an intelligent network card
you can completely avoid 256 TLB misses on each response by doing the work
in the kernel -- or by providing a sendfile()-style interface... anything
to avoid the need for virtual-to-physical (v->p) mappings.

I really have to put a caveat on all of this:  I'm just blowing hot air, I
haven't measured any of this, and I'm not likely to do it soon. 

Dean


