httpd-apreq-dev mailing list archives

From Stas Bekman <>
Subject Re: dev question: apreq 2 as a filter?
Date Fri, 23 Aug 2002 09:39:45 GMT
William A. Rowe, Jr. wrote:
> At 02:46 AM 8/23/2002, Stas Bekman wrote:
>> Joe Schaefer wrote:
>>> Stas Bekman <> writes:
>>> [...]
>>>> as you can see the input filter that saw the body was invoked 
>>>> *after* the response phase has finished. So my question was, how to 
>>>> force the connection filter to request the next brigades which 
>>>> include the body, if nobody else does that. This part can be very 
>>>> tricky if you understand what I mean. I hope Bill can see the 
>>>> problem here, unless I miss something.
>>> I see the problem.  However, don't we have the exact same problem
>>> with the current code?  I mean, if the reported Content-Length is
>>> too big, WE don't attempt to read any POST data.  We also give up
>>> if we've accumulated too much data.
>> No, the problem I'm referring to is how to invoke the filter in the
>> first place. It won't be invoked unless the response handler calls
>> ap_get_brigade. Hmm, I think I know how this should work.
>> Any time anybody makes an enquiry of apreq, we check a flag to see
>> whether the data has been consumed and parsed (which is done
>> already). If it wasn't consumed yet, apreq inserts its input filter
>> and performs the ap_get_brigade call.
> Up to some sane limit.  I wouldn't want us pulling more than 64k or so
> without some extra thought.

of course.

>> Bill, please correct me if I'm wrong as I see the corrected picture in 
>> my mind:
>> apreq is split into 2 parts: the warehouse and the filter.
>> The warehouse is invoked from the HTTP response handler by simply
>> performing *any* call into apreq_, which essentially asks for
>> something. The warehouse checks whether the body has already been
>> consumed and parsed; if so, it answers the query. If the data
>> wasn't consumed yet, the warehouse inserts the apreq filter as the
>> last request input filter and immediately calls ap_get_brigade until
>> it gets an EOS bucket or decides to terminate the pull (e.g. because
>> the POST limit was exceeded).
> Sounds sane.
>> The filter is really just a sucker that feeds the warehouse, which
>> does the parsing and storing of the parsed data.
> That was the direction I was thinking.


>> hmm, for some reason I think that we end up using the current apreq
>> model, just that it gets fed from its own filter, which can be
>> eliminated altogether.
> And that POST data is still passed down the filter chain to be consumed
> in other interesting ways by modules like cgi [passed on to the cgi app.]
> It really isn't consumed, it's more like your snoop filter.

As I suggested before, this can be configurable; it'll probably save
some memory if you know that you don't want the body anywhere but in
apreq's warehouse.

>> the point is that you cannot invoke the apreq filter by itself, 
>> somebody has to invoke it (inserting is not enough), that somebody is 
>> the response handler, so we return to where we have started, not 
>> really needing any filter stuff at all.
> Agreed, I don't want folks inserting it themselves.  You might end up with
> three copies in the filter stack.  They simply need to call the apreq_
> method which will then inject the filter as-needed.  Still, several
> modules [filters] can all look at the same body, and we still pass the
> POST data on.  This is significantly more thorough than the current
> apreq model.


How do we protect against injecting the filter too late, if something
has already pulled the data in? Just document this potential problem?

>>> In the 1.3-ish past, I'd assumed that the proper course of action for 
>>> these situations was to instruct apache to shut down the connection.  
>>> Otherwise (say with keepalives on) the client will
>>> send the post data and apache will treat it as a new, malformed http 
>>> request.
>> I think that this part is a later concern, but as Bill has
>> mentioned before, discard_request_body() will probably take care of it.
>> For future optimizations, I can see a situation where a lazy mode
>> could be used, e.g. don't consume the whole body once the query has
>> been satisfied: say the file upload follows the form data, but the
>> form wasn't filled in properly, so we don't care about the file
>> because we want to return the form to the user to complete again.
> For reasons I stated before, such a module is not a healthy module.  Let us
> presume [for now] that the body is sucked by the handler in time for us to
> react and deal with the POSTed body.


Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker     mod_perl Guide --->
