httpd-dev mailing list archives

From Stas Bekman <>
Subject Re: filtering huge request bodies (like 650MB files)
Date Thu, 11 Dec 2003 06:04:12 GMT
I'm debugging the issue. I have a good test case: a response handler that sends 
1 byte followed by rflush in a loop creates lots of buckets. I can see that each 
iteration allocates 40k, i.e. each new bucket brigade and its bucket demand 40k 
which won't be reused until the next request. This happens only when a custom 
filter is in use. Next I'm going to dig in and try to see whether the extra 
allocation comes from mod_perl or something else. I'll keep you posted.

use GTop ();
use Apache::Const -compile => 'OK';

my $gtop = GTop->new;

sub handler {
     my $r = shift;

     my $chunk = "x";

     for (1..70) {
         my $before = $gtop->proc_mem($$)->size;

         # send a single byte and force the bucket brigade out
         $r->print($chunk);
         $r->rflush;

         my $after = $gtop->proc_mem($$)->size;
         warn sprintf "size : %-5s\n", GTop::size_string($after - $before);
     }

     return Apache::OK;
}


This handler on its own requires just a few bytes. When fed through a simple 
unmodified pass-through filter, it ends up allocating 70 * 40k = 2800k. 
Obviously, with a 650MB file there are going to be many more buckets...
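For reference, a minimal sketch of such a pass-through filter, using the mod_perl 2 streaming filter API (the package name is hypothetical):

```perl
package MyApache::PassThrough;

use strict;
use warnings;

use Apache::Filter ();
use Apache::Const -compile => qw(OK);

# A no-op output filter: read each chunk of bucket data and pass it
# through unmodified. Even though nothing is changed, every rflush in
# the response handler delivers a fresh bucket brigade to this filter.
sub handler {
    my $filter = shift;

    while ($filter->read(my $buffer, 1024)) {
        $filter->print($buffer);
    }

    return Apache::OK;
}

1;
```

It would be installed with something like `PerlOutputFilterHandler MyApache::PassThrough` in the server config.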

Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker     mod_perl Guide --->
