httpd-dev mailing list archives

From Justin Erenkrantz <>
Subject Re: [PATCH] Take 2 of the http filter rewrite
Date Mon, 24 Sep 2001 04:22:15 GMT
On Sun, Sep 23, 2001 at 08:34:00PM -0700, Ryan Bloom wrote:
> I don't see how this works at all for pipelined requests.  You moved the HTTP_IN
> filter from a connection filter to a request filter.  But the only connection filter
> we have is the CORE_IN filter.  All that filter does is pass the socket up to the
> HTTP_IN filter.  The problem is that HTTP_IN will be destroyed in between requests,
> so the second request will be lost.  I am continuing to read the code, so I may see
> what I am missing, but in the meantime, Justin, if you could explain that to me,
> it would make my review go a lot faster.

Yeah, that's the one tricky part about this patch.  Remember that we
no longer read more than is explicitly asked for from the socket.

If the request has a body, it must have a Content-Length or
Transfer-Encoding header, so we now know when to send the EOS bucket in
these cases.  (In theory, I think you can do a POST with Connection: Close
and half-close the socket, but that's ineligible for pipelining anyway.)
So in these cases, the extra data is left unread by ap_http_filter
or by the core input filter (which really does nothing - the next request
is just left unread at the socket).
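To make that concrete, here's a toy sketch (plain C, no real APR or
brigade calls - a string stands in for the connection) of the
Content-Length case: consume exactly the declared body length and leave
any pipelined bytes where they are.  The function name read_body is
hypothetical, not anything in the patch.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: consume exactly `content_length` bytes of body
 * from `stream` and return a pointer to the first unread byte.  In the
 * real filter the unread bytes simply stay at the socket; here a string
 * stands in for the connection. */
static const char *read_body(const char *stream, size_t content_length,
                             char *out, size_t out_size)
{
    assert(content_length < out_size);
    memcpy(out, stream, content_length);   /* the body for this request */
    out[content_length] = '\0';
    return stream + content_length;        /* EOS point: stop reading here */
}
```

The pipelined second request survives untouched past the returned
pointer, which is exactly why Content-Length-delimited bodies are the
easy case.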

You do seem to be right - when we read a line at a time
(readbytes == 0), we could read too much.  However, I think we could
register a cleanup with the request pool that sets aside everything
that was read but not processed (i.e. all buckets except the socket
bucket) so that it lives as long as the connection pool.  The question
then becomes how we retrieve this data.  My first guess would be to
store it via a pool userdata option, but maybe we could shove it back
down into the core (which definitely breaks some abstractions).  So, I'm
leaning towards the userdata - using a key like "HTTP-IN-<tid>" to
ensure uniqueness should work.
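Roughly what I have in mind, as a self-contained toy (the table below
stands in for apr_pool_userdata_set/get on the connection pool; the
names userdata_set, userdata_get, and request_cleanup are mine, purely
illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy stand-in for pool userdata: a fixed table scoped to the
 * "connection".  Keys like "HTTP-IN-<tid>" keep per-thread entries
 * distinct.  Illustrative only - not the real APR API. */
#define MAX_ENTRIES 8
struct userdata { char key[32]; char val[128]; };
static struct userdata conn_userdata[MAX_ENTRIES];
static int n_entries;

static void userdata_set(const char *key, const char *val)
{
    snprintf(conn_userdata[n_entries].key, sizeof conn_userdata[0].key, "%s", key);
    snprintf(conn_userdata[n_entries].val, sizeof conn_userdata[0].val, "%s", val);
    n_entries++;
}

static const char *userdata_get(const char *key)
{
    for (int i = 0; i < n_entries; i++)
        if (strcmp(conn_userdata[i].key, key) == 0)
            return conn_userdata[i].val;
    return NULL;
}

/* The "request pool cleanup": set aside whatever was read but not yet
 * processed so it survives into the next request on this connection. */
static void request_cleanup(unsigned long tid, const char *unread)
{
    char key[32];
    snprintf(key, sizeof key, "HTTP-IN-%lu", tid);
    userdata_set(key, unread);
}
```

The real version would set aside buckets into the connection pool
rather than copy strings, but the lookup-by-key shape is the same.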

When the request is completed, we re-enter ap_read_request, which sets
up a new HTTP_IN filter.  On the first read, HTTP_IN asks the core
filter for the socket (notice the change here - core will now *always*
return the socket when asked) and, ideally, the next request has been
left unread at the socket (or saved via userdata - see above).  HTTP_IN
then processes the new request.
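That first read boils down to a two-way choice; a minimal sketch
(strings stand in for brigades, and http_in_first_read is a name I'm
making up for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical first read of a fresh HTTP_IN instance: prefer any data
 * set aside by the previous request's cleanup; fall back to the socket
 * only when nothing was saved. */
static const char *http_in_first_read(const char *saved,
                                      const char *socket_data)
{
    return (saved != NULL && saved[0] != '\0') ? saved : socket_data;
}
```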

The only real question is what happens when readbytes is -1 - a
higher-level filter explicitly asking for everything.  We'd then do a
blocking read of the entire socket until it is exhausted.  I think the
current code has the same problem anyway.  I'm not really sure what we
can do here.  Thoughts?  -- justin
