httpd-dev mailing list archives

Subject: Re: Filter I/O take 3.
Date: Sat, 17 Jun 2000 13:00:59 GMT

> *) complexity due to the ioblock/ioqueue stuff
> *) I do not understand how sub requests are handled in this scheme
>    (essentially, how are the two sets of hooks joined together)
> *) if I fetch 100Mb from a database for insertion into the content stream,
>    how does that work in this scheme? (flow control, working set)
> *) flow control (I didn't see how this works)
> *) working set size (since all data must occur on the heap)
> I'm off to sleep soon, but today I'm going to work on fixing up the common
> stuff between the alternate schemes. For example, we agree that an
> "insert_filters" hook needs to be added. There are a number of ap_rputs()
> (and similar) calls that need to change to BUFF operations. etc.

A couple of things.  Don't do this!  By making these changes now, you
break a patch that is being considered.  And the way you have started,
you have broken the patch completely.  This has the side effect of making
it impossible for somebody new to even apply my patch, and it makes
updating my tree a real mess.  I cannot re-create the patch this
weekend, because I am at a wedding in NH and can't take the time to
do it.  This effectively stops all discussion on this topic for a few
days.  Breaking a patch because two people are working in the same section
of code on two different projects, that I can understand.  Your patch,
however, basically does some of what I did (and it makes little sense to
have a separate function for this in my scheme).

Now, because you have done a wonderful job expressing where the hook
scheme falls down, allow me to do the same for the link scheme:

1)  Modules must maintain their own state for the data they haven't dealt
with yet.  The hook scheme just has them return that data, and Apache
hands it back to them later.  BTW, the only way I can see to really solve
this is either to use iovecs inside the module (invalidating the
ioblock/ioqueue argument) or to strcat the data together (invalidating the
working set/stack/heap argument).
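
For illustration, the hook behavior I mean looks roughly like this (the
signature and helper are made up for the sketch, not the patch's actual
API):

    #include "httpd.h"   /* for request_rec */

    /* The hook reports how many bytes it consumed; Apache holds on to
     * the remainder and re-presents it on the next call, so the module
     * itself keeps no leftover data around. */
    static int my_filter(request_rec *r, const char *buf, int len)
    {
        /* scan_for_complete_tokens() is hypothetical: find how much of
         * buf can be processed without splitting a token */
        int consumed = scan_for_complete_tokens(buf, len);

        /* ... transform buf[0..consumed) and pass it along ... */

        return consumed;   /* the core hands back buf+consumed later */
    }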

2)  Take a look at the bwrite code.  If we don't write enough data at
once, bwrite makes a copy of all of the data in order to concatenate it
together.  The hook based scheme minimizes this problem in two
ways:  1)  it encourages people to maintain big chunks to send at
once;  2)  there is an obvious optimization where we can combine data to
create larger chunks.  The link based scheme is going to promote writing
small chunks of data down the pipe.  IMNSHO, this invalidates the working
set/stack/heap argument, because both cases have this problem due to the
copying in bwrite.
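
The behavior I'm describing looks roughly like this (a simplified sketch,
not the literal buff.c source; the struct and helper are stand-ins):

    #include <string.h>

    #define LARGE_WRITE_THRESHOLD 31   /* buff.c uses a small constant */

    struct buff {          /* stand-in for Apache's BUFF */
        char *outbase;     /* output buffer */
        int   outcnt;      /* bytes already buffered */
    };

    static void sketch_bwrite(struct buff *fb, const char *buf, int nbyte)
    {
        if (nbyte >= LARGE_WRITE_THRESHOLD) {
            /* big enough: would go straight to the wire */
        }
        else {
            /* too small: copied into the buffer to concatenate */
            memcpy(fb->outbase + fb->outcnt, buf, nbyte);
            fb->outcnt += nbyte;
        }
    }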

3)  Modules must filter data without any context about what else is going
on around the data they are directly working on, unless of course they
maintain that context themselves.  There is an obvious change to the hook
scheme that allows us to send a minimum amount of data to a hook at any
one time.  This cannot be done for the link based scheme (at least not
cleanly, IMHO).
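
Something like this is what I have in mind (entirely hypothetical; no
such function or parameter exists in the patch):

    /* A filter registers the smallest chunk it is willing to process;
     * the core accumulates data until that minimum is reached before
     * invoking the hook.  Both the function name and the third
     * parameter are made up for this sketch. */
    ap_register_filter(r, ssi_filter, 1024 /* min bytes per call */);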

4)  We are jumping up and down the stack and basically arriving at the
same place.  The link based scheme encourages people to write small bits
of data to the next filter, which will make us jump up and down the stack
all the time.  The only solution to this is to use iovecs (Roy's bucket
brigade), which invalidates the ioblock/ioqueue argument.
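
The iovec alternative amounts to gathering the small chunks and crossing
the boundary once, along these lines (a generic writev() sketch, not
anything from either patch):

    #include <string.h>
    #include <sys/uio.h>

    static void send_gathered(int fd, const char *hdr,
                              const char *body, int body_len)
    {
        struct iovec vec[2];

        /* collect the pieces instead of writing each one separately */
        vec[0].iov_base = (void *)hdr;  vec[0].iov_len = strlen(hdr);
        vec[1].iov_base = (void *)body; vec[1].iov_len = body_len;

        writev(fd, vec, 2);   /* one trip down the stack, not two */
    }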

I think that's it.  BTW, as far as getting negative feedback from the
network, Apache doesn't do that currently.  Apache checks to see how much
data is being written, and if it is less than a certain size, it buffers
it.  If it is larger, it writes.  Look for LARGE_WRITE_THRESHOLD in
buff.c.  The hook scheme easily duplicates this behavior, and there is a
comment on how to do it.  In the patch I submitted there is a comment with
MAX_STRING_SIZE in it.  That needs to be changed to use
LARGE_WRITE_THRESHOLD instead.
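
My guess at the shape of that change (not the patch itself; the variable
name is made up):

    /* Use the same threshold buff.c uses instead of MAX_STRING_SIZE;
     * total_pending is a hypothetical count of buffered filter output. */
    if (total_pending < LARGE_WRITE_THRESHOLD) {
        /* keep buffering; don't run the filter hooks yet */
    }
    else {
        /* enough data has accumulated: invoke the hooks now */
    }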


Ryan Bloom               
406 29th St.
San Francisco, CA 94131
