httpd-dev mailing list archives

From Lieven Govaerts <>
Subject Re: Httpd 3.0 or something else
Date Tue, 10 Nov 2009 21:33:23 GMT
On Tue, Nov 10, 2009 at 6:10 PM, Greg Stein <> wrote:
> On Tue, Nov 10, 2009 at 11:14, Akins, Brian <> wrote:
>> On 11/9/09 3:08 PM, "Greg Stein" <> wrote:
>>> 2) If you have 10,000 client connections, and some number of sockets
>>> in the system ready for read/write... how do you quickly determine
>>> *which* buckets to poll to get those sockets processed? You don't want
>>> to poll 9999 idle connections/buckets if only one is ready for
>>> read/write.
>> Epoll/kqueue/etc. Takes care of that for you.
> Sorry. I wasn't clear.
> You have 10k buckets representing the response for 10k clients. The
> core loop reads the response from the bucket, and writes that to the
> network.
> Now. A client socket wakes up as writable. I think it is pretty easy
> to say "read THAT bucket" to get data for writing.
> Consider the scenario where one of those responses is proxied -- it is
> arriving from a backend origin server. That underlying read-socket is
> stuffed into the core loop. When that read-socket becomes available
> for reading, *which* client response bucket do you start reading from?
> And what happens if the client socket is not writable?
> You could just zip thru the 10k response buckets and poll each one for
> data to read, and the serf design states that the underlying
> read-socket *will* get read. But you've gotta do a lot of polling to
> get there.
> I think that will be an interesting problem to solve. I believe it
> would be something like this:
> Consider when a request arrives. The core looks at the Request-URI and
> the Headers. From these inputs, it determines the appropriate
> response. In this case, that response is identified by a bucket,
> configured with those inputs. (and somewhere in here, any Request-Body
> is managed; but ignore that for now)  As that response bucket is
> constructed, along with all interior/nested buckets, that construction
> can say "I've got an FD here. Please add this to the core loop." The
> FD would be added, and would then be associated with the response
> bucket, so we know which to read when the FD wakes up.
Suppose this is the diagram of the proxy scenario, where A and B are
buckets wrapping the socket bucket:

browser --> (client fd) [core loop] [A [B [socket bucket (server fd)]]] <-- server

If there's an event on the client fd, the core loop can read bytes
from bucket A - as much as the client socket can handle.

But if only the server fd wakes up, the core loop can't really read
anything, as it has nowhere to forward the data to.
The best it can do is tell bucket A: somewhere deep down there's data
to read, and since I (the core loop) was alerted of that fact, one of
the other buckets B, C, ... must be interested in buffering or
proactively transforming that data, so please forward this hint down
the chain.

I don't think the buckets interface has such a function yet, but
something similar to 'read 0 bytes' would do.

So, did I understand your proposal correctly?

