httpd-modules-dev mailing list archives

From Sorin Manolache <>
Subject Re: How to control block size of input_filter data
Date Tue, 12 Mar 2013 10:17:07 GMT
On 2013-03-12 10:52, Hoang-Vu Dang wrote:
> Thank you for the quick reply!
> The context is what I am looking into right now, and it is indeed the
> right solution to my original question. I'd just like to know a bit more
> detail, if you don't mind; you said:
> "I typically destroy it by placing a callback in the cleanup hook of the
> req->pool. "

Now I remember: I use C++, so I need to create and destroy the context 
myself. But if you allocate your context from an apr_pool, you don't need 
to bother with destroying it, because it is destroyed automatically along 
with the pool. Sorry for confusing you.

Just for information, I create/destroy the contexts like that:

apr_status_t destroy_ctx(MyContext *ctx) {
    delete ctx;
    return APR_SUCCESS;
}

int flt_init_function(ap_filter_t *flt) {
    // I use C++; if you allocate ctx from a pool, you don't even need
    // this destruction callback
    flt->ctx = new MyContext();

    // destroy_ctx is called when r->pool is destroyed, i.e. at the very
    // end of request processing, after the response has been sent to the
    // client and the request logged.
    apr_pool_cleanup_register(flt->r->pool, flt->ctx,
                              (apr_status_t (*)(void *))destroy_ctx,
                              apr_pool_cleanup_null);
    return OK;
}

The filter function could be something like:

apr_status_t input_filter(ap_filter_t *f, apr_bucket_brigade *bb,
                          ap_input_mode_t mode, apr_read_type_e block,
                          apr_off_t bytes) {
    MyContext *ctx = (MyContext *)f->ctx;

    switch (ctx->state()) {
    case FOUND_EOS:
        // ...
    }
    // ...
}

> What exactly is the callback function that I need to look for? When it
> executes, can we be sure that all the data has been processed and that
> our ctx is still in that state?
> Best, Vu
> On 03/12/2013 10:36 AM, Sorin Manolache wrote:
>> On 2013-03-12 10:16, Hoang-Vu Dang wrote:
>>> Hi all,
>>> When I write an input filter, I notice that the data sent by the
>>> client is not always available in one chunk if it's large.
>>> In other words, the input_filter() function will be called multiple
>>> times per request. My question is how to control this (for example,
>>> the chunk size at which the input is split in two), and what we should
>>> look at to check whether two filter invocations belong to the same
>>> request.
>> You can keep state from one filter invocation to the next in
>> f->ctx, the filter's context.
>> There are many ways to do this.
>> One way I've seen is to check whether f->ctx is NULL (if it is NULL,
>> then this is the first invocation of the filter). If it's
>> NULL, we build the context. Subsequent invocations have the context !=
>> NULL. You'll have to destroy the context at the end of the request. I
>> typically destroy it by placing a callback in the cleanup hook of the
>> req->pool.
>> Another way to destroy it, but in my opinion a wrong way, is to
>> destroy it when you encounter EOS in the data processed by the filter.
>> I'd say it's wrong because a wrongly written filter could send data
>> _after_ an EOS bucket and then you could not distinguish between a new
>> request and a request sending data after EOS.
>> Another way to initialize the context is to supply a filter init
>> function when you register the filter and to initialize the context in
>> this function. This is more elegant in my opinion, because the context
>> is already initialized when the filter is called for the first time.
>> The filter context could be any structure, so you can track the filter
>> processing state in it.
>> Regards,
>> Sorin
