httpd-dev mailing list archives

From Stas Bekman <s...@stason.org>
Subject Re: filtering huge request bodies (like 650MB files)
Date Thu, 11 Dec 2003 06:04:12 GMT
I'm debugging the issue. I have a good test case: a response handler sending 1 byte 
followed by rflush in a loop creates lots of buckets. I can see that each iteration 
allocates 40k, i.e. each new bucket brigade and its bucket demand 40k which won't be 
reused until the next request. This happens only when a custom filter is installed. 
Next I'm going to dig in and try to see whether the extra allocation comes from 
mod_perl or something else. I'll keep you posted.

use strict;
use warnings;

use GTop ();
use Apache::Const -compile => 'OK';

my $gtop = GTop->new;

sub handler {
    my $r = shift;

    $r->content_type('text/plain');

    my $chunk = "x";

    for (1..70) {
        # process size before sending this 1-byte chunk
        my $before = $gtop->proc_mem($$)->size;

        $r->print($chunk);
        $r->rflush;

        # process size after the flush; the delta is what this iteration allocated
        my $after = $gtop->proc_mem($$)->size;
        warn sprintf "size : %-5s\n", GTop::size_string($after - $before);
    }

    Apache::OK;
}
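
A handler like this would normally be mapped to a location in httpd.conf; a minimal 
sketch (the package name MyApache::MemTest is hypothetical, and the module is assumed 
to be on the server's @INC):

PerlModule MyApache::MemTest
<Location /memtest>
    SetHandler perl-script
    PerlResponseHandler MyApache::MemTest
</Location>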

This handler on its own requires just a few bytes. When its output is fed through a 
simple, unmodified pass-through filter, it ends up allocating 70 * 40k = 2800kb. 
Obviously with a 650MB file there are going to be many more buckets...
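
For reference, a pass-through output filter of the kind mentioned above could look 
like the minimal sketch below, using the streaming filter API (the package name is 
hypothetical; this isn't necessarily the exact filter used in the test):

package MyApache::FilterPassThrough;

use strict;
use warnings;

use Apache::Filter ();
use Apache::Const -compile => 'OK';

sub handler {
    my $filter = shift;

    # read whatever data is available and send it on unchanged
    while ($filter->read(my $buffer, 1024)) {
        $filter->print($buffer);
    }

    return Apache::OK;
}

1;

It would be hooked up with something like PerlOutputFilterHandler 
MyApache::FilterPassThrough for the same <Location> as the response handler.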

__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:stas@stason.org http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com

