httpd-dev mailing list archives

From Stas Bekman <s...@stason.org>
Subject filtering huge request bodies (like 650MB files)
Date Wed, 10 Dec 2003 22:53:30 GMT
Chris is trying to filter a 650MB file coming in through a proxy. Unsurprisingly, he 
sees httpd-2.0 allocating more than 650MB of memory, since each bucket uses the 
request's pool memory, which isn't freed until the request is over. Even if his 
machine could handle one such request, what happens when there are several of them 
at once? What's the solution in this case? How can we pipeline the memory 
allocation and release?
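
The only way I can see memory staying bounded is if the filter keeps a streaming 
shape: look at one brigade at a time and never copy the data into anything that 
lives as long as the request pool. A rough sketch (the filter name is made up, 
it's not Chris's code, and error handling is minimal; only AP_MODE_READBYTES is 
considered):

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

static apr_status_t streaming_in_filter(ap_filter_t *f,
                                        apr_bucket_brigade *bb,
                                        ap_input_mode_t mode,
                                        apr_read_type_e block,
                                        apr_off_t readbytes)
{
    apr_status_t rv;
    apr_bucket *e;

    /* Pull only the next chunk from the filter below us. */
    rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
    if (rv != APR_SUCCESS) {
        return rv;
    }

    for (e = APR_BRIGADE_FIRST(bb);
         e != APR_BRIGADE_SENTINEL(bb);
         e = APR_BUCKET_NEXT(e)) {
        const char *data;
        apr_size_t len;

        if (APR_BUCKET_IS_EOS(e) || APR_BUCKET_IS_FLUSH(e)) {
            continue;
        }
        rv = apr_bucket_read(e, &data, &len, block);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        /* Inspect/transform the chunk here, but don't copy it into
         * anything allocated from f->r->pool that outlives this call,
         * or the whole 650MB ends up resident anyway. */
    }

    return APR_SUCCESS;
}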

Ideally the core_in filter would allocate the buckets for a single brigade, pass 
them through the filters, core_out would write them out, and core_in could then 
reuse that memory for the next brigade. Obviously that's not how things work at the 
moment: the memory is never freed (which could probably be dealt with), but the 
real problem is that no data leaves the server before it has been completely read 
in. So httpd always needs at least enough memory to hold all the incoming data, and 
usually at least twice that if any transformation is applied to it.
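
What I mean by reusing the memory is roughly the pattern below: a hypothetical 
output filter (the name and the stream_ctx struct are made up, the API calls are 
the standard 2.0 ones) that keeps one brigade in its context, hands every chunk 
straight to the next filter, and then empties the brigade so the same structure is 
reused, instead of setting buckets aside until the whole body has arrived:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "apr_pools.h"

typedef struct {
    apr_bucket_brigade *bb;   /* reused for every pass downstream */
} stream_ctx;

static apr_status_t streaming_out_filter(ap_filter_t *f,
                                         apr_bucket_brigade *in)
{
    stream_ctx *ctx = f->ctx;
    apr_status_t rv;

    if (ctx == NULL) {
        ctx = f->ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
        ctx->bb = apr_brigade_create(f->r->pool, f->c->bucket_alloc);
    }

    /* Move (transforming as needed) the incoming buckets into our
     * outgoing brigade. */
    while (!APR_BRIGADE_EMPTY(in)) {
        apr_bucket *e = APR_BRIGADE_FIRST(in);
        APR_BUCKET_REMOVE(e);
        APR_BRIGADE_INSERT_TAIL(ctx->bb, e);
    }

    /* Hand this brigade downstream right away rather than holding it
     * until the whole body has been read. */
    rv = ap_pass_brigade(f->next, ctx->bb);

    /* Empty the brigade so the same structure serves the next chunk;
     * nothing accumulates for the lifetime of the request pool. */
    apr_brigade_cleanup(ctx->bb);

    return rv;
}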

I'm not sure what to advise Chris, who as a user rightfully thinks that it's a 
memory leak.

__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:stas@stason.org http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com

