httpd-apreq-dev mailing list archives

From Bojan Smojver <>
Subject Re: Recall of input filter module on completion of output filter processing for a request???
Date Tue, 31 Aug 2004 06:38:24 GMT
On Tue, 2004-08-31 at 15:24, Joe Schaefer wrote:

> > Brigades are actually allocated from the heap directly, not from the
> > pool.

[... code ...]

> I would describe this as creating the brigade from the pool, and
> registering a cleanup that will delete any buckets which remain
> in the brigade.  *Those remaining buckets* may be heap-allocated,
> which is why the cleanup is important.

OUCH! I've been had! This is what the docs say about that function:

apr_bucket_brigade *apr_brigade_create(apr_pool_t *p, apr_bucket_alloc_t *list)

        Create a new bucket brigade. The bucket brigade is originally empty.

        p       The pool to associate with the brigade. Data is not
                allocated out of the pool, but a cleanup is registered.
        list    The bucket allocator to use.

        Returns: The empty bucket brigade.

I never actually had a look at the code to verify, and this read to me as
if the brigade isn't actually allocated from the pool, only registered
with it. Obviously, I should have looked :-( The "data" refers to the
buckets, not the brigade itself.
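So only the cleanup is tied to the pool; the brigade structure itself comes from the bucket allocator. A minimal sketch of the usual creation pattern in a request output filter (the filter name is hypothetical; `apr_brigade_create`, `ap_pass_brigade` and the `f->r`/`f->c` fields are the standard httpd/APR APIs):

```c
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Hypothetical request output filter: apr_brigade_create() ties the
 * brigade to f->r->pool only via a registered cleanup that clears any
 * buckets still in the brigade when the pool is destroyed; the brigade
 * struct itself comes from the per-connection bucket allocator. */
static apr_status_t my_output_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket_brigade *out;

    out = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    /* ... move/transform buckets from bb into out ... */
    return ap_pass_brigade(f->next, out);
}
```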

> [...]
> > When the request is dealt with, the brigade may get hosed, if it's
> > registered with the request pool. 
> But this does not happen *within* a call to ap_pass_brigade.  Consider
> a request filter that calls ap_pass_brigade(f->next, bb).  The request
> filter won't have a problem with bb->p == r->pool, and neither will
> a downstream connection filter that dispenses with bb immediately.
> However, that downstream connection filter will have a problem if 
> it keeps a pointer to bb for future use.  IMO this would be a bug 
> in the downstream connection filter, not the request filter.

I thought that the connection filters may delay delivery of a particular
request output data, particularly if the payload is small. In other
words, they may group all of it together to deliver in a single packet.
Wouldn't that mean that by the time the connection filter (on the
network level, for instance) gets to it, the brigade, including its
contents, may be gone?
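If a connection filter does want to delay delivery like that, my understanding is that the safe pattern is to set the buckets aside into a connection-lifetime brigade rather than keep a pointer to the caller's brigade. A sketch (the filter, context struct and predicate are hypothetical; `ap_save_brigade` is the real httpd helper that moves the buckets and sets each one aside into the given pool):

```c
#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Hypothetical connection filter context: a brigade created from the
 * connection pool, which outlives any single request. */
typedef struct {
    apr_bucket_brigade *saved;
} my_conn_ctx;

static apr_status_t my_conn_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    my_conn_ctx *ctx = f->ctx;

    if (ctx == NULL) {
        f->ctx = ctx = apr_pcalloc(f->c->pool, sizeof(*ctx));
        ctx->saved = apr_brigade_create(f->c->pool, f->c->bucket_alloc);
    }

    if (payload_is_small(bb)) {  /* hypothetical "delay delivery" test */
        /* Don't keep a pointer to bb for later: move its buckets into
         * our own brigade, setting each bucket aside into the
         * connection pool so the data survives request-pool cleanup. */
        return ap_save_brigade(f, &ctx->saved, &bb, f->c->pool);
    }

    /* ... otherwise flush ctx->saved followed by bb to the network ... */
    return ap_pass_brigade(f->next, bb);
}
```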

Actually, when I was testing a lot of pipelined requests, that's exactly
the kind of problem I was experiencing. I guess I did something else
wrong somewhere...

> > pool), of course, your interpretation is again correct. They have to
> > have lifetime of at least connection. Same problem again - if they
> > don't, by the time the output filter gets to deal with them, they may be
> > gone. Again, instant segfault.
> OTOH, the bucket's setaside function takes a pool argument for just
> this reason, so in principle you should be able to use buckets created
> from the request pool if you really want to.  Perhaps the problem
> is really a wrong choice of bucket type, not the wrong initial pool?

Isn't the pool bucket's setaside also a no-op?

I had some request pool buckets inside the brigade and on occasion a
pipelined requests would segfault Apache. On the other hand, when I
started using connection pool buckets, all worked out fine.
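For reference, the two choices I'm contrasting look roughly like this (`data`/`len` are hypothetical; `apr_bucket_heap_create`, `apr_bucket_pool_create` and `apr_bucket_setaside` are the real APR bucket APIs):

```c
#include "apr_buckets.h"

/* Given some request-generated data (hypothetical variables):
 *   const char *data; apr_size_t len;
 *   request_rec *r; conn_rec *c;
 */

/* 1. A heap bucket copies the data up front, so there is no request
 *    pool lifetime issue at all: */
apr_bucket *hb = apr_bucket_heap_create(data, len, NULL, c->bucket_alloc);

/* 2. A pool bucket is cheaper, but must be set aside before r->pool is
 *    destroyed; setaside is meant to make the data survive into the
 *    longer-lived pool: */
apr_bucket *pb = apr_bucket_pool_create(data, len, r->pool, c->bucket_alloc);
apr_status_t rv = apr_bucket_setaside(pb, c->pool);
```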

Anyhow, I just thought that the problem may be related to the pool
lifetime. But then again, maybe it isn't. Sorry if I wasted everyone's time.

