httpd-dev mailing list archives

From: Ryan Bloom <...@covalent.net>
Subject: Re: Async I/O question?
Date: Wed, 05 Dec 2001 22:04:34 GMT
On Wednesday 05 December 2001 01:28 pm, Justin Erenkrantz wrote:
> On Wed, Dec 05, 2001 at 01:10:42PM -0800, Ryan Bloom wrote:
> > 2)  In a partial Async model, like what Dean is suggesting, the
> > I/O thread needs to be able to accept multiple chunks of data
> > to be written to the client.  This would allow you to handle a flush
> > bucket, and the processing thread wouldn't stop processing the
> > request, it just wouldn't wait to continue processing if the data
> > couldn't be written immediately.  The point is to get the processing
> > threads done with the request ASAP, so that they can handle the
> > next request.  The I/O for the threads can wait an extra 1/2 second
> > or two.
> >
> > Think of it as three threads all working together.
>
> <light bulb goes on>
>
> > Now, this can be improved even more by moving to a four
> > thread model, where one thread is dedicated to reading from
> > the network, and one is dedicated to writing to the network.
>
> Thanks for the clarification.  It helps tremendously.  So, we
> aren't talking about a pure async model - just one where we attempt
> to hand off.  And, moving to a four thread model may be hindered
> by the specific OS - like /dev/poll can indicate reading and
> writing on the same socket.  Would we want two threads sharing
> ownership of the socket?  Perhaps.  However, something like SSL
> would complicate things (think renegotiations).

It can indicate reading/writing on the same socket, but we may not
want it to.  As for SSL, because we are doing encryption in memory
instead of directly to the socket, this shouldn't be a big problem.
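
To make that concrete, here's roughly what "encryption in memory" looks
like (completely untested sketch, not mod_ssl's actual code - just the
OpenSSL memory-BIO idea, with names I made up):

#include <openssl/ssl.h>
#include <openssl/bio.h>

/* ssl has already been set up with memory BIOs via
 * SSL_set_bio(ssl, rbio, wbio), so nothing here touches a socket. */
static int encrypt_to_memory(SSL *ssl, BIO *wbio,
                             const char *plain, int plain_len,
                             char *cipher, int cipher_len)
{
    int n = SSL_write(ssl, plain, plain_len); /* ciphertext lands in wbio */
    if (n <= 0) {
        return -1;
    }
    /* Drain the ciphertext; this buffer is what gets queued for the
     * writer thread to push onto the socket whenever it is writable. */
    return BIO_read(wbio, cipher, cipher_len);
}

The writer thread only ever sees opaque encrypted bytes, which is why
a separate network-write thread doesn't really complicate SSL.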

> 1) Any transient buckets will have to be setaside in this MPM.
>    Is this a concern?  It seems that you also can't reuse the
>    same memory space within the output loop.  Once I pass it
>    down the chain, I must say good-bye to any memory or data
>    pointed within the bucket.  (We couldn't even reuse heap
>    data.)  Is this even a change from current semantics?

We'll have to set aside transient data.  We already say that filters
have to forget about any data once it has been passed down the
stack, so that isn't a change in semantics.
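
The handoff for a brigade with transient data in it would look roughly
like this (untested sketch; deliver_to_io_thread is just a placeholder
for whatever queues the brigade for the writer thread):

#include "apr_buckets.h"

/* Placeholder for the handoff to the I/O thread. */
extern apr_status_t deliver_to_io_thread(apr_bucket_brigade *bb);

static apr_status_t setaside_and_handoff(apr_bucket_brigade *bb,
                                         apr_pool_t *pool)
{
    apr_bucket *b;
    apr_status_t rv;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        /* Transient buckets point at memory the caller may reuse;
         * copy the data somewhere the I/O thread can rely on. */
        rv = apr_bucket_setaside(b, pool);
        if (rv != APR_SUCCESS && rv != APR_ENOTIMPL) {
            return rv;
        }
    }
    return deliver_to_io_thread(bb);
}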

> 2) We could implement this solely by manipulating the socket
>    hooks you added, right?  Would there be any change external to
>    the MPM?  (I guess we wouldn't know until we tried perhaps...)

There shouldn't be.  A lot of the work I did a few weeks ago was to
help make this possible with the 2.0 architecture.  I have a few more
things that can be done with those changes, but those are more for
me to play with than useful projects.

> 3) In the read case, the I/O is directed to a specific worker
>    thread, right?  So, a worker thread makes a request for some
>    amount of I/O and it is delivered to that same thread (so we
>    can still use thread-local storage)?  The wait for data from
>    I/O thread in worker thread will be synchronous.

Presumably yes, but if this is designed correctly, we could move
to an async model for input too, where the thread that requested
the data may not be the thread that receives it.
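
If we do go async on input, the plumbing could be as simple as a
completion queue (untested sketch; assumes an apr_queue_t-style
blocking FIFO, and read_completion_t is a name I just made up):

#include "apr_queue.h"
#include "apr_buckets.h"

/* One entry per completed read. */
typedef struct {
    void *request_ctx;          /* enough state to resume the request */
    apr_bucket_brigade *bb;     /* data the I/O thread read for us */
} read_completion_t;

/* I/O thread: a socket became readable, we read into a brigade, now
 * post it.  We don't care which worker originally asked for it. */
static apr_status_t post_completion(apr_queue_t *done_q,
                                    read_completion_t *c)
{
    return apr_queue_push(done_q, c);     /* blocks if the queue is full */
}

/* Any worker thread: take the next completed read, whichever request
 * it belongs to, and resume that request. */
static apr_status_t next_completion(apr_queue_t *done_q,
                                    read_completion_t **c)
{
    void *item;
    apr_status_t rv = apr_queue_pop(done_q, &item);  /* blocks if empty */
    if (rv == APR_SUCCESS) {
        *c = item;
    }
    return rv;
}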

> 4) What happens when all of the IO threads are full (and their
>    ensuing buffers are full too)?  Do we just force the worker to
>    wait?  In fact, I'd imagine this would be a common case.  The
>    worker threads should be fairly fast - the IO thread would be
>    the slow ones.

I don't think that has been fully designed yet.  I mean, minimally
it will have to wait, but the answer may also be to create a
second I/O thread to pick up some of the leftovers.
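
Something along these lines, maybe (untested sketch; io_thread_func,
max_io_threads, etc. are invented names):

#include "apr_queue.h"
#include "apr_thread_proc.h"

/* Whatever loop the I/O threads run; placeholder declaration here. */
extern void * APR_THREAD_FUNC io_thread_func(apr_thread_t *thd, void *data);

static apr_status_t queue_for_write(apr_queue_t *io_q, void *job,
                                    int *num_io_threads, int max_io_threads,
                                    apr_pool_t *pool)
{
    apr_status_t rv = apr_queue_trypush(io_q, job);
    if (rv != APR_EAGAIN) {
        return rv;                        /* queued, or a hard error */
    }
    /* The queue is full.  If we're allowed another I/O thread, start
     * one to soak up the backlog; either way the worker blocks on the
     * push below instead of spinning. */
    if (*num_io_threads < max_io_threads) {
        apr_thread_t *thd;
        if (apr_thread_create(&thd, NULL, io_thread_func, io_q, pool)
                == APR_SUCCESS) {
            (*num_io_threads)++;
        }
    }
    return apr_queue_push(io_q, job);     /* blocking push */
}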

Ryan
______________________________________________________________
Ryan Bloom				rbb@apache.org
Covalent Technologies			rbb@covalent.net
--------------------------------------------------------------
