httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject Re: Apache 2.0/NSPR
Date Fri, 11 Sep 1998 15:48:50 GMT


On Thu, 10 Sep 1998, Simon Spero wrote:

> 2) Layering is dangerous to performance, and should be collapsed as much
> as possible. This is part of the job of the middle end.

This is part of what prompted me to suggest we look at zero-copy,
"page based" i/o... in particular, the chunking layer doesn't need to
modify anything.  It just needs to build iovecs.  But without reference
counting on the buffers we have to actually copy the contents rather than
just pass them around...
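
Roughly what I'm picturing, as a sketch only -- the refbuf type and every
name below are made up for illustration, not anything in the existing
buffer code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/uio.h>

    /* Hypothetical reference-counted buffer: the chunking layer points
     * iovecs at it instead of copying, and the memory goes away only
     * when the last holder drops its reference. */
    typedef struct refbuf {
        char   *data;
        size_t  len;
        int     refs;
    } refbuf;

    static void refbuf_hold(refbuf *b) { b->refs++; }

    static void refbuf_drop(refbuf *b)
    {
        if (--b->refs == 0) {
            free(b->data);
            free(b);
        }
    }

    /* Chunked encoding without touching the body: emit the chunk-size
     * line, point an iovec at the shared buffer, then the trailing CRLF.
     * Returns the number of iovec slots filled; the caller hands them
     * straight to writev(). */
    static int build_chunk(struct iovec *vec, refbuf *body,
                           char *hdr, size_t hdrsize)
    {
        int n = snprintf(hdr, hdrsize, "%lx\r\n", (unsigned long)body->len);

        vec[0].iov_base = hdr;            vec[0].iov_len = (size_t)n;
        vec[1].iov_base = body->data;     vec[1].iov_len = body->len;
        vec[2].iov_base = (void *)"\r\n"; vec[2].iov_len = 2;

        refbuf_hold(body);      /* the writer now shares ownership */
        return 3;
    }

The point is that nothing between the content handler and writev() ever
copies the body; the layers just bump and drop references.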

> 4) A lot of the cache could be made kernel resident. This means that  
> although cache invalidation can be complicated (objects can have a
> validate method), simple tests should be handlable simply and predictably-
> for example file modification date, eq checks for negotiation etc.
> Otherwise you have to leave the kernel to validate.

For some sites it'd be sufficient to invalidate the entire cache on a
regular basis.  That's pretty easy to do.  But yeah, invalidation in
general is a painful problem...

> 5) The cache allows more objects than normal to be treated as files (i.e.
> they have finite length, rather than being data streams). This makes it a
> lot easier to attach file system protocol front ends to the middle-end
> namespace (in particular code from Samba and the old user mode NFS
> server). This allows the server to mediate *all* access to the data store,
> making it easy to use *active* invalidation, with no validation checks at
> all in the fast path.

Hmmm... interesting.

On a related note, we need to abstract the bits of the filesystem that
we need in order to serve HTTP, so that the backing store doesn't have to
be a filesystem.  I'd say this should even go as far as modules being able
to open() a URI and read from it -- so that it can be as transparent as
possible.  So rather than using ap_bopenf() directly (the apache-nspr
equivalent of fopen()), modules would open a URI through some other
interface, and that may go to the filesystem, or to a database, or whatever.
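
Something like the following is what I mean by "some other interface".
It's only a sketch, and every name in it (ap_uri_open() and friends,
ap_store_ops) is invented for illustration:

    /* Hypothetical: a backing store registers a small ops table, and
     * modules go through ap_uri_open()/ap_uri_read()/ap_uri_close()
     * instead of calling ap_bopenf() or fopen() on a pathname directly.
     * The server routes the open to whichever store owns that part of
     * the namespace (filesystem, database, whatever). */
    typedef struct ap_uri_handle ap_uri_handle;

    typedef struct {
        ap_uri_handle *(*open)(const char *uri, int flags);
        long           (*read)(ap_uri_handle *h, void *buf, long len);
        int            (*close)(ap_uri_handle *h);
    } ap_store_ops;

    ap_uri_handle *ap_uri_open(const char *uri, int flags);
    long           ap_uri_read(ap_uri_handle *h, void *buf, long len);
    int            ap_uri_close(ap_uri_handle *h);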

A difficulty in this is access control -- there's a difference between an
external request asking for some files and an internal module asking for
them.  They have different rights.

> 6) This implies that the namespace model should be mappable in terms of
> directories, files, and specials (cgi-scripts, etc). This gives the
> hierarchical component of the resolution process a higher priority than
> the other phases. 

I'd like to see the namespace have "mount points" somewhat like the unix
filesystem.  The mount point controls which underlying store serves that
part of the hierarchy... and it's a simple system, easy to optimize for.
i.e. I'd really like to avoid "if the URI matches this regex then it's
served by database foobar".  That's far too general.
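
i.e. resolution becomes nothing more than a longest-prefix match over a
mount table.  A sketch, with made-up names:

    #include <string.h>

    /* Hypothetical mount table: each URI prefix maps to a backing store.
     * Resolution is a longest-prefix match -- cheap, predictable, and
     * nothing like running the URI through a pile of regexes. */
    struct mount {
        const char *prefix;     /* e.g. "/", "/icons/", "/catalog/" */
        void       *store;      /* whatever implements that store   */
    };

    static struct mount *resolve_mount(struct mount *tab, int n,
                                       const char *uri)
    {
        struct mount *best = NULL;
        size_t best_len = 0;
        int i;

        for (i = 0; i < n; i++) {
            size_t len = strlen(tab[i].prefix);
            if (len > best_len && strncmp(uri, tab[i].prefix, len) == 0) {
                best = &tab[i];
                best_len = len;
            }
        }
        return best;            /* NULL if not even "/" is mounted */
    }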

On the other hand, I'm happy if filters are able to do "wildcarding" like
that.  But I think it's wrong for them to match on the URI -- instead they
should match on content-types, or other (abstract) attributes.  For the
underlying filesystem store we'll still derive content-type from the
filename (well, on the Mac or OS/2 I could see it going into the metadata).
But from the point of view of the server, that's an implementation detail
of a lower-level backing store.
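
In other words a filter hangs off a content-type (or some other abstract
attribute), not off a chunk of URI space.  Purely illustrative -- none of
these names exist:

    /* Hypothetical filter registration keyed on the response's
     * content-type rather than on the URI: the filter runs whenever the
     * type matches, no matter which backing store produced the bytes. */
    typedef int (*ap_filter_fn)(void *ctx, const char *buf, long len);

    int ap_register_filter(const char *content_type, ap_filter_fn func);

    /* e.g. ap_register_filter("text/html", ssi_filter);
     *      ap_register_filter("application/x-gzip", gunzip_filter);  */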

> 7) TCP changes: There are several changes that would be useful for a web
> server. 
> 	7.3) Use Zero-copy w/ page flipping. Especially if you have a
> 	kernel cache. 

It's so nice that yummy hardware capable of doing this will be commodity
on PCs in a year or two... gigabit ether cards have all the on-card smarts
to do zero-copy.

> 	7.4) Allow connect_and_write (send data with syn)
> 	7.5) Allow accept_and_read( read data with syn, delay syn-ack).

I disagree... but only because I'm a linuxhead and syscalls are cheap ;) 

Although there are a lot of similar optimizations that I would still like
to see -- specifically in the way of socket/fd options.  It's annoying
that every new connection has to be set no-nagle and non-blocking.  That
stuff should be inheritable, or be part of the accept().

accept() is somewhat of a poor child compared to open() ... open() gets to
have flags associated with the FD for free. 
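
To be concrete, here's the ritual every accepted connection pays for
today (a sketch, error handling omitted) -- exactly the stuff that ought
to be inherited from the listening socket or passed as flags to accept():

    #include <fcntl.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static int accept_connection(int listen_fd)
    {
        struct sockaddr_in addr;
        socklen_t len = sizeof(addr);
        int one = 1;
        int fd;

        fd = accept(listen_fd, (struct sockaddr *)&addr, &len);
        if (fd < 0)
            return -1;

        /* you can't portably rely on these being inherited from
         * listen_fd, so every single connection pays two more syscalls */
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (char *)&one, sizeof(one));
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

        return fd;
    }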

Proxies want the same treatment for socket(). 

Plus there's a need to handle closing thousands of FDs on a
fork()/exec()... unix FDs just weren't designed for big threaded servers
with thousands of FDs.  You either pay at every socket creation, or you
pay hard at each fork()... The problem is that fork() isn't the primitive
we want -- we want CreateProcess(). 

How many programs actually *use* fork()?  Sure, Apache 1.x does.  In some
sense so do inetd and its ilk.  But they don't have to.  They'd be
much better off with threads and CreateProcess().  NT gets this right.

Anyhow I had a proposal for this which adds a new set of "extended" file
handles... which have the exact semantics that threaded servers want. 
Essentially they're shared (not copied!) across fork()/clone() but not
across exec().  So a CreateProcess() amounts to a fork(), a few dup2()s,
and an exec(), reducing the overhead.
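
i.e. the CreateProcess() we keep rebuilding by hand looks roughly like
this today (a sketch; error handling and waitpid() left out):

    #include <sys/types.h>
    #include <unistd.h>

    /* Classic unix spawn: fork, wire up the child's stdio with dup2(),
     * then either close every other descriptor or have marked them all
     * close-on-exec when they were created.  With extended handles that
     * aren't inherited across exec(), the close loop disappears. */
    static pid_t spawn_cgi(const char *path, char *const argv[],
                           int child_stdin, int child_stdout, int max_fd)
    {
        pid_t pid = fork();
        int fd;

        if (pid != 0)
            return pid;         /* parent, or -1 if fork() failed */

        dup2(child_stdin, 0);
        dup2(child_stdout, 1);

        /* the part that hurts with thousands of open sockets: either this
         * loop at every fork(), or an FD_CLOEXEC fcntl() on every fd at
         * creation time */
        for (fd = 3; fd < max_fd; fd++)
            close(fd);

        execv(path, argv);
        _exit(127);             /* exec failed */
    }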

(Why do I care about CGIs?  'cause they're something that gets
benchmarked...) 

Dean


