httpd-dev mailing list archives

From Ryan Bloom <>
Subject Re: [PATCH] Take 2 of the http filter rewrite
Date Mon, 24 Sep 2001 04:35:31 GMT
On Sunday 23 September 2001 09:22 pm, Justin Erenkrantz wrote:
> On Sun, Sep 23, 2001 at 08:34:00PM -0700, Ryan Bloom wrote:
> > I don't see how this works at all for pipelined requests.  You moved the
> > HTTP_IN filter from a connection filter to a request filter. But, the
> only connection filter we have is the CORE_IN filter.  All that filter
> does is pass the socket up to the HTTP_IN filter.  The problem is that
> > HTTP_IN will be destroyed in between requests, so the second request will
> be lost.  I am continuing to read the code, so I may see what I am
> missing, but in the meantime, Justin, if you could explain that to me,
> it would make my review go a lot faster.
> Yeah, that's the one tricky part about this patch.  Remember that we
> no longer read more than is explicitly asked for from the socket.
> If the request has a body, it must have Content-Length or
> Transfer-Encoding and we now know when to send EOS bucket in these cases.
> (In theory, I think you can do a POST with a Connection: Close -
> half-close the socket, but that's ineligible for pipelining anyway).  So
> in these cases, the extra information is left unread by ap_http_filter
> or in the core input filter (which really does nothing - the new request
> is just left unread at the socket).

I don't see this at all.  There is no way to limit how much is read from the
socket itself.  I just read through the bucket_socket code, and we always
try to read APR_BUCKET_BUFF_SIZE.  That means we are going to read more
from the socket than we should, and the second request will be lost.

> You do seem to be right - when we read a line at a time
> (readbytes == 0), we could read too much.  However, I think we could
> register a cleanup with the request pool that setasides everything
> that was read but not processed (i.e. all buckets except for the
> socket) to now live as long as the connection pool.  The question
> now becomes how do we retrieve this data back.  My first guess would
> be to store it via a pool userdata option, but maybe we could shove
> it back down into the core (which definitely breaks some abstractions).
> So, I think I'm leaning towards the userdata - using "HTTP-IN-<tid>"
> as the key to guarantee uniqueness should work.

No.  I fought against this VERY strenuously last time, and I will fight it again.
This is why it needs to be a connection filter.  To keep the abstraction valid,
any data read from the socket but not yet consumed should be stored in the
filter's context, the same way it is for the output filters.

> The only real question is what happens when readbytes is -1 - a
> higher-level filter explicitly asks for everything.  We'd then do a
> blocking read for the entire socket until it is exhausted.  I think
> the current code would have problems with it as well anyway.  I'm not
> really sure what we can do here.  Thoughts?  -- justin

The current code doesn't have problems with this, because we rely on
the fact that the HTTP_IN filter understands HTTP.  It knows when the request
is done. Anything else is invalid.


Ryan Bloom
Covalent Technologies
