httpd-dev mailing list archives

From Greg Stein <>
Subject Re: Httpd 3.0 or something else
Date Tue, 10 Nov 2009 23:13:40 GMT
On Tue, Nov 10, 2009 at 16:33, Lieven Govaerts <> wrote:
> On Tue, Nov 10, 2009 at 6:10 PM, Greg Stein <> wrote:
>> You have 10k buckets representing the response for 10k clients. The
>> core loop reads the response from the bucket, and writes that to the
>> network.
>> Now. A client socket wakes up as writable. I think it is pretty easy
>> to say "read THAT bucket" to get data for writing.
>> Consider the scenario where one of those responses is proxied -- it is
>> arriving from a backend origin server. That underlying read-socket is
>> stuffed into the core loop. When that read-socket becomes available
>> for reading, *which* client response bucket do you start reading from?
>> And what happens if the client socket is not writable?
>> You could just zip thru the 10k response buckets and poll each one for
>> data to read, and the serf design states that the underlying
>> read-socket *will* get read. But you've gotta do a lot of polling to
>> get there.
>> I think that will be an interesting problem to solve. I believe it
>> would be something like this:
>> Consider when a request arrives. The core looks at the Request-URI and
>> the Headers. From these inputs, it determines the appropriate
>> response. In this case, that response is identified by a bucket,
>> configured with those inputs. (and somewhere in here, any Request-Body
>> is managed; but ignore that for now)  As that response bucket is
>> constructed, along with all interior/nested buckets, that construction
>> can say "I've got an FD here. Please add this to the core loop." The
>> FD would be added, and would then be associated with the response
>> bucket, so we know which to read when the FD wakes up.
> Suppose this is the diagram of the proxy scenario, where A and B are
> buckets wrapping the socket bucket:
>
> browser --> (client fd) [core loop] [A [B [socket bucket (server fd) <-- server
>
> If there's an event on the client fd, the core loop can read bytes
> from bucket A - as much as the client socket can handle.

Right, and right.

> But if only the server fd wakes up, the core loop can't really read
> anything, as it has nowhere to forward the data to.
> The best thing it can do is tell bucket A: somewhere deep down
> there's data to read, and since I (the core loop) was alerted of
> that fact, one of the other buckets B, C, ... must be interested in
> buffering or proactively transforming that data, so please forward
> this trigger.

Buckets have a peek() function.

Hmm. Theoretically, the bucket is *empty* of contents, or you would
not have returned to the event loop. Thus, when the peek() rolls
around, the bucket is going to figure out what it can provide without
blocking.

But... the buckets were designed for client-side operation. Buckets
are supposed to be emptied completely. That isn't true on the server:
the client socket might not be available for writing, so we don't
empty a response bucket to completion.

It does sound like something more may be needed, in order to propagate
some reading down the stack of buckets. But there is also a worry:
if we read, then where do we put that data if the network isn't ready
for writing?
These read/status/nesting/etc. concepts exist in order to prevent
deadlocks. Ideally, *everything* is read and written to completion. An
appserver might not be able to provide you with more content, until
you give it something first. So the trick is to flush all writes, and
to flush all reads (because the latter might signal another write in
order to continue generating content... ad nauseam).

> I don't think the buckets interface already has a function for that,
> but something similar to 'read 0 bytes' would do.
> So, did I understand your proposal correctly?

Yes. But we may have some refining to do, as you've raised, once we
look more closely at the flows.

