httpd-dev mailing list archives

From Juan Rivera <>
Subject [PATCH] Avoid unnecessary brigade splits on core input and output filters. WAS: EOS or FLUSH buckets
Date Tue, 10 Jun 2003 20:59:28 GMT
I've seen this problem with a SOCKS protocol module I wrote.

I'm including a patch that fixes this problem. It does what I mentioned
below: in the input filter, it moves the buckets rather than creating a new
brigade and then concatenating; in the output filter, it splits the brigade
after a FLUSH bucket only if there are buckets after the flush.


-----Original Message-----
From: [] 
Sent: Tuesday, June 10, 2003 3:41 PM
Subject: Re: EOS or FLUSH buckets

Juan Rivera wrote:

> Right, my module leaks memory because the core input and output filters 
> split the bucket brigades. So it keeps creating more and more bucket 
> brigades that are not released until the connection is gone.

When you see this, are we talking about a lot of HTTP requests pipelined on a
single connection, or a single HTTP request that lasts a long time?

> First of all, I think the split in the core input filter (READBYTES) 
> should be optimized because all it is doing is splitting the brigade to 
> concatenate it into another brigade. Wouldn't it be more efficient to do a 
> "move buckets from brigade ctx->b to b" and avoid creating a temporary 
> brigade?
> So for the output side, when I send a flush, it splits the brigade. If 
> the flush is the last bucket, this might not be necessary, what do you 
> think?

I'll defer these two questions to our Bucketmeister and/or efficiency
experts (Cliff? Brian?).

> On the topic of EOS, I think that if the last bucket is an EOS and is 
> not a keep alive connection it should not hold the data but it currently 
> does.

Maybe.  But if it's not a keepalive connection, we should be sending a FLUSH
bucket within microseconds, no?  OK, maybe that path could be optimized.
We'd have to be careful, though, because keepalive connections are very
common.  We wouldn't want to penalize the hot path by optimizing for the
less common case.

