httpd-dev mailing list archives

From: Stas Bekman <s...@stason.org>
Subject: Re: filtering huge request bodies (like 650MB files)
Date: Thu, 11 Dec 2003 06:44:57 GMT
Stas Bekman wrote:
> I'm debugging the issue. I have a good test case: a response
> handler sending 1 byte followed by rflush in a loop creates lots of
> buckets. I can see that each iteration allocates 40k, i.e. each new
> bucket brigade and its bucket demand 40k which won't be reused till the
> next request. This happens only when using a custom filter. Next I'm
> going to move in and try to see whether the extra allocation comes from
> mod_perl or something else. I'll keep you posted.
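
For context, the test case above was a mod_perl response handler; a rough C
equivalent of the same pattern (handler name and loop count are made up)
would be:

    /* Write one byte, then flush, so every iteration pushes a
     * fresh bucket brigade down the filter chain. */
    #include "httpd.h"
    #include "http_protocol.h"

    static int trickle_handler(request_rec *r)
    {
        int i;

        ap_set_content_type(r, "text/plain");

        for (i = 0; i < 1000; i++) {
            ap_rwrite("x", 1, r); /* send a single byte */
            ap_rflush(r);         /* force the brigade out now */
        }

        return OK;
    }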

I now know what the problem is. It is not a problem in httpd or its filters, 
but in mod_perl, which allocated the filter struct from the pool. With many 
bucket brigades there were many filter invocations during the same request, 
each resulting in a fresh allocation. So I have to move to good old 
malloc/free to solve this problem.
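
Something along these lines (function names here are hypothetical, and
modperl_filter_t comes from mod_perl's own headers):

    #include <stdlib.h>
    #include <string.h>
    #include "mod_perl.h" /* for modperl_filter_t (mod_perl's header) */

    /* Take the filter struct off the heap instead of the request
     * pool, so the memory can be returned as soon as the filter is
     * done with it, rather than sitting in the pool until the end
     * of the request. */
    modperl_filter_t *modperl_filter_new(void)
    {
        modperl_filter_t *filter = malloc(sizeof(*filter));
        if (filter != NULL) {
            memset(filter, 0, sizeof(*filter)); /* keep apr_pcalloc's zeroing */
        }
        return filter;
    }

    void modperl_filter_destroy(modperl_filter_t *filter)
    {
        free(filter); /* freed per invocation, not per request */
    }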

Though it looks like I've found a problem in apr_pcalloc.

modperl_filter_t *filter = apr_pcalloc(p, sizeof(*filter));

was consistently allocating 40k of memory, whereas sizeof(*filter) == 16464 
(about 16k).

Replacing apr_pcalloc with apr_palloc reduced the allocation to 16k.

Could it be a bug in APR_ALIGN_DEFAULT? apr_pcalloc calls APR_ALIGN_DEFAULT 
and then calls apr_palloc, which calls APR_ALIGN_DEFAULT again, possibly 
doubling the memory usage.
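
For reference, the relevant path looks roughly like this in the APR source
(paraphrased from apr_general.h and apr_pools.c; the exact code may differ
by version):

    #define APR_ALIGN(size, boundary) \
        (((size) + ((boundary) - 1)) & ~((boundary) - 1))
    #define APR_ALIGN_DEFAULT(size) APR_ALIGN(size, 8)

    void *apr_pcalloc(apr_pool_t *pool, apr_size_t size)
    {
        void *mem;

        size = APR_ALIGN_DEFAULT(size); /* first alignment */
        if ((mem = apr_palloc(pool, size)) != NULL) {
            /* apr_palloc runs the size through
             * APR_ALIGN_DEFAULT again internally */
            memset(mem, 0, size);
        }

        return mem;
    }

Whether that second alignment can actually grow the size is easy to check 
from the macro itself.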

__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:stas@stason.org http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com

