httpd-dev mailing list archives

From Alexei Kosut <ako...@leland.Stanford.EDU>
Subject Re: apache-nspr-01.tar.gz
Date Mon, 27 Apr 1998 09:33:05 GMT
On Mon, 27 Apr 1998, Dean Gaudet wrote:

> - buffered i/o:
>     They don't have any buffered I/O... I'm trying to figure out the
>     cleanest way to put BUFFs on top of the layers.  I'll probably make
>     the buffering a layer itself... but there's no flush function to
>     run down the layers.

Hmm. I haven't looked at NSPR any, but thinking back to BUFF, and the work
Ed and I did this summer to define an API for how a layered version of
BUFF would work (which, btw, would only take a few hours of work to
implement, at least the BUFF part; we just never did), here's one
solution:

Take the NSPR layers, and put our BUFF code on top of them. I.e., make all
layers (optionally) buffered, and define mechanisms to flush them and do
other such things. I don't know what sort of mechanisms NSPR has that
could allow for this sort of thing, but it could be hacked in. For
example, some special pointer value, like (void *)(int)42 - something not
likely to be in use - could be passed to the write function to mean
"flush." Obviously, that's not the best solution, but something like that
could work.
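To make that concrete, here's a minimal sketch of the sentinel idea. All
names here (LAYER_FLUSH, layer, layer_write) are made up for illustration;
this isn't NSPR's or BUFF's actual API. The point is just that a layer's
write function can recognize a reserved pointer value as "drain your
buffer and pass the flush down":

```c
#include <stddef.h>
#include <string.h>

/* Reserved pointer value meaning "flush", since the layer API in this
 * sketch has no separate flush entry point. */
#define LAYER_FLUSH ((const void *)42)

typedef struct layer {
    char buf[4096];
    size_t len;               /* bytes currently buffered          */
    struct layer *next;       /* layer below us; NULL = the "wire" */
} layer;

int layer_write(layer *l, const void *data, size_t n)
{
    if (data == LAYER_FLUSH) {
        /* Drain our buffer, then propagate the flush downward.
         * (A real bottom layer would write to the socket here;
         * this toy just discards.) */
        if (l->len > 0 && l->next)
            layer_write(l->next, l->buf, l->len);
        l->len = 0;
        return l->next ? layer_write(l->next, LAYER_FLUSH, 0) : 0;
    }
    if (l->len + n > sizeof(l->buf)) {   /* would overflow: drain first */
        if (l->next)
            layer_write(l->next, l->buf, l->len);
        l->len = 0;
    }
    if (n >= sizeof(l->buf))             /* too big to buffer at all */
        return l->next ? layer_write(l->next, data, n) : 0;
    memcpy(l->buf + l->len, data, n);
    l->len += n;
    return 0;
}
```

The ugly part, as noted above, is that (void *)42 is a magic value any
caller could accidentally pass; a real API would want a distinct flush
operation in the layer's method table instead.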

(Although buffered reads still cause big problems - see the paper Ed and
I wrote for details on that; it's in apache-2.0/stacked-io.)

>     I think they think that buffered i/o can just sit on top of all the
>     layers.  That's fine... until you consider pipelined connections,
>     and chunking.  I want chunking to be a layer; but if it's below
>     the buffering layer then it will shred up the output into many
>     packets... either that or we implement buffering in the chunking
>     layer too.  Which isn't actually that bad -- but again, there's no
>     flush function.

I think the lack of buffered i/o is a bad thing, and special-casing
certain conditions won't help. Consider, for example, a filter module
(and I definitely think we need them) that does something with a string -
tokenizes it the same way chunking does, for example, or maybe does some
compression. Now let's say our bottom-level content generator calls
ap_rputc(), putting one character at a time into the stream. If we have,
say, ten of these filters, plus chunking, plus maybe SSL or HTTP-NG, then
without buffering each character causes all 12 filters to go through all
their overhead (function call, setup, tokenizing output, etc.). For a
10,000-character document, that's 120,000 filter calls. Now consider a
buffered version, where the bottom-level filter waits until 4096
characters have accumulated, and then passes them up. Even if the fifth
filter up spews single-character writes too, its output will get buffered
before being passed on, so there will be roughly 10,000*12/4096 ≈ 30
filter calls. Compared to 120,000, that's a *huge* savings, even if the
per-call overhead is small.
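As a quick check of that arithmetic (using the numbers from the example
above - 12 layers, a 10,000-character document, a 4096-byte buffer; the
function names are just for illustration):

```c
/* Back-of-envelope model: how many times a filter function runs for
 * per-character writes vs. buffered block writes. */

long unbuffered_calls(long doc, long layers)
{
    return doc * layers;            /* every character hits every layer */
}

long buffered_calls(long doc, long layers, long bufsize)
{
    return doc * layers / bufsize;  /* ~one traversal per full buffer */
}

/* unbuffered_calls(10000, 12)       -> 120000
 * buffered_calls(10000, 12, 4096)   -> 29 (i.e. the ~30 above)       */
```

This model ignores the final partial buffer and any flushes, but the two
orders of magnitude between the results are the point.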

Admittedly, this example is contrived, but I think it's similar to things
that are going to be desired. I can't see an unbuffered design making a
good web server. Buffering is definitely desired, IMO.

>     I can't see how they would ever get more than one response per packet.
>     Maybe they're happy with "shredding" the pipeline, but I'm not.

Me either.

-- Alexei Kosut <akosut@stanford.edu> <http://www.stanford.edu/~akosut/>
   Stanford University, Class of 2001 * Apache <http://www.apache.org> *
