httpd-dev mailing list archives

From Brian Pane <>
Subject Re: Proposed connection state diagram
Date Mon, 05 Sep 2005 20:34:21 GMT
On Sep 5, 2005, at 9:17 AM, Paul Querna wrote:

> Right now, I am trying to understand the implications of a Handler
> state. It says we only shift states from Handler -> Write Completion
> once we have the 'Full Request Generated'.  So does this mean the
> Handler will write to the client at all?

Yes, in this model the Handler can do writes.  There are a few use
cases where this seems useful:

- Regular sending of files: If the handler is allowed to write,
   we don't have to wait for a context switch and poll before
   beginning to send the response.

- Handlers that produce a lot of non-file output: If a handler
   produces a megabyte of output, consisting of a bunch of heap or
   pool buckets, it's better not to try to buffer up the whole
   response before beginning the write.

> In my insane vision that I have never written down anywhere before, I
> thought we would just save brigades onto a buffer inside the C-O-F.  As
> soon as *any* data was buffered it would go into the write completion
> state.  I guess it actually makes sense to have a Handler state.  When
> we are in the Handler state, we know more data is coming.  Once we are
> in write completion, as soon as we are done sending data, the request is
> finished.

Yeah, my rationale was that it was important to differentiate
"handler might still send some more data to this output filter" from
"handler is done sending data to this output filter," so that we could
ensure that only one thread at a time is working with the connection.

I'd originally thought about having C-O-F hand off brigades to an
event-handling threadpool, so that the worker threads running
handlers wouldn't do any actual writes to the client.  But I gave up
on that idea because making the bucket and pool allocators
thread-safe would involve a lot of overhead.

It's worth noting that my event_output_filter prototype ended up being
a bit messy because I made the filter responsible for determining when
the transition from Handler to Write Completion state had occurred.
To do this, it had to apply knowledge about the patterns of buckets
that the core happens to generate.  This, along with the logic for  
when to actually do a blocking write upon seeing a flush bucket, could
probably be a lot cleaner if we modified the core to change the state
from Handler to Write Completion, rather than making the filter try to
figure it out.
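To make the division of responsibility concrete, here's a minimal
sketch of a per-connection state machine in which the core, not the
output filter, performs the Handler -> Write Completion transition.
All of the names and the structure layout are hypothetical, not the
actual httpd internals:

```c
#include <assert.h>

/* Hypothetical connection states, mirroring the proposed diagram. */
typedef enum {
    CONN_STATE_HANDLER,          /* handler may still produce data */
    CONN_STATE_WRITE_COMPLETION, /* only buffered data remains     */
    CONN_STATE_DONE              /* request finished               */
} conn_state;

typedef struct {
    conn_state state;
    int bytes_buffered;          /* data queued in the C-O-F */
} conn;

/* Called by the core once the handler has generated the full
 * response, so the filter never has to infer this from bucket
 * patterns. */
static void conn_handler_done(conn *c)
{
    c->state = CONN_STATE_WRITE_COMPLETION;
}

/* Called after a (nonblocking) write drains some buffered data.
 * In write completion, an empty buffer finishes the request; in the
 * Handler state, more data may still be on the way. */
static void conn_wrote(conn *c, int nbytes)
{
    c->bytes_buffered -= nbytes;
    if (c->state == CONN_STATE_WRITE_COMPLETION && c->bytes_buffered == 0)
        c->state = CONN_STATE_DONE;
}
```

The point of the sketch is the ordering: draining the buffer while
still in the Handler state must not end the request, because the
handler might still send more data to the output filter.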

>> Comments welcome... Am I missing any state transitions (particularly
>> for error cases)?
>> Should there be an "access logger" state?  Are there other good use
>> cases to consider
>> for a nonblocking handler feature, besides mod_proxy?
> Logging Sucks.  On Fast Machines, it wouldn't be a huge hit, and most
> likely could run inside the event thread.  However, we also might not be
> logging to a file... there are plenty of other modules that log to
> remote machines too.  I guess my view is that we should let any blocking
> operation be handled by a Worker Thread, and Logging still seems like
> one that cannot always be distilled into non-blocking operations.

That makes sense.  And the example of nonblocking writes to a logger
on a remote machine makes me think that we really need to support
nonblocking socket I/O for sockets in general--not just sockets that
happen to be connections from clients.

Supporting the general case of a generic socket poller is a little
tricky because of the need to support different timeouts on different
sockets in the pollset.  E.g., if the pollset contains two descriptors
with timeouts 3 and 10 seconds in the future, respectively, and you add
a new descriptor with a timeout 5 seconds in the future, this new
timeout needs to be added in the middle of the timeout queue--and if
there's a poll event on the descriptor that subsequently cancels the
timeout, the timeout needs to be removed from wherever it is in the
queue.
A timing wheel might be an effective solution.
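The queue-management problem above could be sketched roughly as
follows: one slot per tick, with each slot holding an intrusive
doubly-linked list of timers, so that inserting a timeout "in the
middle" is just hashing into a slot, and cancelling one is an O(1)
unlink no matter where it sits.  This is a toy standalone sketch, not
APR code, and all names are made up:

```c
#include <assert.h>
#include <stddef.h>

#define WHEEL_SIZE 16   /* one slot per second, wrapping */

typedef struct wheel_timer {
    struct wheel_timer *prev, *next;
    int slot;           /* -1 when not scheduled */
    int expires;        /* absolute tick of expiry */
} wheel_timer;

typedef struct {
    wheel_timer *slots[WHEEL_SIZE];
    int now;            /* current tick */
} timing_wheel;

static void wheel_init(timing_wheel *w)
{
    int i;
    for (i = 0; i < WHEEL_SIZE; i++)
        w->slots[i] = NULL;
    w->now = 0;
}

/* Schedule t to fire `delay` ticks from now (delay < WHEEL_SIZE). */
static void wheel_add(timing_wheel *w, wheel_timer *t, int delay)
{
    int slot = (w->now + delay) % WHEEL_SIZE;
    t->expires = w->now + delay;
    t->slot = slot;
    t->prev = NULL;
    t->next = w->slots[slot];
    if (t->next)
        t->next->prev = t;
    w->slots[slot] = t;
}

/* O(1) cancellation: unlink from wherever it is, no queue scan.
 * This is the case where a poll event arrives before the timeout. */
static void wheel_cancel(timing_wheel *w, wheel_timer *t)
{
    if (t->slot < 0)
        return;
    if (t->prev)
        t->prev->next = t->next;
    else
        w->slots[t->slot] = t->next;
    if (t->next)
        t->next->prev = t->prev;
    t->slot = -1;
}

/* Advance one tick; returns the number of timers that expired. */
static int wheel_tick(timing_wheel *w)
{
    int fired = 0;
    wheel_timer *t;
    w->now++;
    t = w->slots[w->now % WHEEL_SIZE];
    while (t) {
        wheel_timer *next = t->next;
        if (t->expires == w->now) {  /* skip timers a revolution away */
            wheel_cancel(w, t);
            fired++;
        }
        t = next;
    }
    return fired;
}
```

With the 3/10/5-second example from above: all three insertions are
constant-time slot hashes, and cancelling the 5-second timeout when
its descriptor becomes readable never touches the other two.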

