httpd-apreq-dev mailing list archives

From Stas Bekman <>
Subject Re: dev question: apreq 2 as a filter?
Date Fri, 23 Aug 2002 07:46:24 GMT
Joe Schaefer wrote:
> Stas Bekman <> writes:
> [...]
>>as you can see the input filter that saw the body was invoked *after* 
>>the response phase has finished. So my question was, how to force the 
>>connection filter to request the next brigades which include the body, 
>>if nobody else does that. This part can be very tricky if you understand 
>>what I mean. I hope Bill can see the problem here, unless I miss something.
> I see the problem.  However, don't we have the exact same problem
> with the current code?  I mean, if the reported Content-Length is
> too big, WE don't attempt to read any POST data.  We also give up
> if we've accumulated too much data.

No, the problem I'm referring to is how to invoke the filter in the 
first place. It won't be invoked if the response handler never calls 
ap_get_brigade. Hmm, I think I know how this should work.

Any time anybody makes an enquiry of apreq, we check a flag to see 
whether the data has already been consumed and parsed (that check is 
done already). If it hasn't been consumed yet, apreq inserts its input 
filter and performs the ap_get_brigade call itself.
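That check-then-pull could look something like the sketch below: a minimal, self-contained model in plain C, where `apreq_req_t`, `apreq_params` and `apreq_pull_body` are made-up names standing in for the real structures, and the "insert filter + drive ap_get_brigade" step is stubbed out:

```c
#include <string.h>

/* Hypothetical, simplified apreq request object; the real apreq 2
 * structures and function names will differ. */
typedef struct {
    int  body_consumed;   /* the "did we already suck the body?" flag */
    char parsed[64];      /* parsed params would live here */
} apreq_req_t;

/* Stand-in for "insert the apreq input filter and drive
 * ap_get_brigade until EOS" -- here it just fakes the result. */
static void apreq_pull_body(apreq_req_t *req)
{
    strcpy(req->parsed, "name=stas");
    req->body_consumed = 1;
}

/* Every apreq enquiry funnels through here: the body is pulled
 * lazily, on the first call that actually needs it. */
const char *apreq_params(apreq_req_t *req)
{
    if (!req->body_consumed)
        apreq_pull_body(req);
    return req->parsed;
}
```

The point of the shape is only that no response handler has to call ap_get_brigade explicitly: the first accessor call does it on demand.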

Bill, please correct me if I'm wrong; here is the corrected picture as 
I see it:

apreq is split into two parts: the warehouse and the filter.

The warehouse is invoked from the HTTP response handler by simply 
performing *any* apreq_ call, which essentially asks for something. The 
warehouse checks whether the body has already been consumed and parsed; 
if it has, it answers the query. If the data hasn't been consumed yet, 
the warehouse inserts the apreq filter as the last request input filter 
and immediately calls ap_get_brigade until it gets an EOS bucket or 
decides to terminate the sucking action (e.g. because the POST limit 
was exceeded).
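A sketch of that suck loop, modeled in plain C without the Apache APIs: the `body`/`chunk` arguments stand in for repeated ap_get_brigade calls, and `warehouse_t` and `limit` are invented names, not apreq's:

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    char   parsed[256];   /* parsed data accumulates here */
    size_t total;         /* bytes consumed so far */
    int    done;          /* saw EOS */
    int    overflow;      /* POST limit exceeded, pull abandoned */
} warehouse_t;

/* Feed one chunk to the parser (here it just appends; the real
 * warehouse would run the urlencoded/multipart parser). */
static void warehouse_parse(warehouse_t *w, const char *buf, size_t len)
{
    memcpy(w->parsed + w->total, buf, len);
    w->total += len;
    w->parsed[w->total] = '\0';
}

/* The suck loop: pull chunk-sized pieces of body (a stand-in for
 * repeated ap_get_brigade calls) until EOS or limit is exceeded. */
void warehouse_consume(warehouse_t *w, const char *body,
                       size_t chunk, size_t limit)
{
    size_t len, remaining = strlen(body);
    const char *p = body;

    while (remaining > 0) {
        len = remaining < chunk ? remaining : chunk;
        if (w->total + len > limit) {
            w->overflow = 1;    /* terminate the sucking action */
            return;
        }
        warehouse_parse(w, p, len);
        p += len;
        remaining -= len;
    }
    w->done = 1;                /* the EOS-bucket case */
}
```

Either the loop runs to EOS and the warehouse can answer any later query from `parsed`, or the limit trips and the body is abandoned mid-pull.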

The filter is really just a sucker that feeds the warehouse, which does 
the parsing and the storing of the parsed data.

hmm, for some reason I think we end up using the current apreq model, 
just that it gets fed from its own filter, which could then be 
eliminated altogether.

the point is that the apreq filter cannot invoke itself; somebody has 
to invoke it (inserting it is not enough), and that somebody is the 
response handler. So we return to where we started, not really needing 
any filter stuff at all.

> In the 1.3-ish past, I'd assumed that the proper course of action for 
> these situations was to instruct apache to shut down the 
> connection.  Otherwise (say with keepalives on) the client will
> send the post data and apache will treat it as a new, malformed 
> http request.

I think this part is of later concern, but as Bill mentioned before, 
discard_request_body() will probably take care of it.

For future optimization, I can see a situation where a lazy mode could 
be used, i.e. don't consume the whole body once the query has been 
satisfied. E.g. the form data is followed by a file upload, but the 
form wasn't filled in properly, so we don't care about the file because 
we want to return the form to the user to complete again.
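A toy model of that lazy mode, again with invented names (`lazy_body_t`, `lazy_param`): the lookup consumes the urlencoded body one parameter at a time and stops as soon as the query is satisfied, so a trailing file upload would never be read off the connection:

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *body;     /* the full body the client would send */
    size_t      consumed; /* how far we actually read */
} lazy_body_t;

/* Returns 1 and copies the value into val if key is found; 0 if not.
 * Scanning stops at the pair that satisfies the query, so consumed
 * records how little of the body we really had to suck. */
int lazy_param(lazy_body_t *b, const char *key, char *val, size_t maxlen)
{
    const char *p = b->body;
    size_t klen = strlen(key);

    while (*p) {
        const char *eq  = strchr(p, '=');
        const char *amp = strchr(p, '&');
        const char *end = amp ? amp : p + strlen(p);

        if (eq && eq < end
            && (size_t)(eq - p) == klen && memcmp(p, key, klen) == 0) {
            size_t vlen = (size_t)(end - eq - 1);
            if (vlen >= maxlen)
                vlen = maxlen - 1;
            memcpy(val, eq + 1, vlen);
            val[vlen] = '\0';
            b->consumed = (size_t)(end - b->body); /* stop sucking here */
            return 1;
        }
        if (!amp)
            break;
        p = amp + 1;
    }
    b->consumed = strlen(b->body);  /* had to read it all */
    return 0;
}
```

With a body like `name=stas&file=...huge upload...`, asking for `name` leaves everything after the first `&` unconsumed; whether the leftover can then be safely discarded is the discard_request_body() question above.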

Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker     mod_perl Guide --->
