httpd-dev mailing list archives

From: Stas Bekman <>
Subject: Re: filtering huge request bodies (like 650MB files)
Date: Wed, 10 Dec 2003 23:18:44 GMT
Cliff Woolley wrote:
> On Wed, 10 Dec 2003, Stas Bekman wrote:
>>Chris is trying to filter a 650MB file coming in through a proxy. Obviously he
>>sees that httpd-2.0 is allocating > 650MB of memory, since each bucket will
>>use the request's pool memory and won't free it untill after the request is
> Whoa.  Obviously?  It is NOT supposed to do that.  Buckets do not use pool
> memory for that very reason (well, that's one of the two or three big
> reasons).
>>could theoretically reuse that memory for the next brigade.
> Which is exactly what is supposed to happen.

Ah, cool. I thought pools were used everywhere. Thanks for correcting me.
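
To make the distinction concrete, here is a minimal sketch of a pass-through
output filter (hypothetical name, error handling trimmed). The point is that
each brigade goes straight downstream, so bucket data lives in the bucket
allocator rather than in r->pool and nothing accumulates for the life of the
request:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Pass-through output filter: look at each bucket, then hand the
 * whole brigade to the next filter instead of holding onto it. */
static apr_status_t passthru_out_filter(ap_filter_t *f,
                                        apr_bucket_brigade *bb)
{
    apr_bucket *b;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *data;
        apr_size_t len;

        if (APR_BUCKET_IS_EOS(b)) {
            break;                      /* end of the body */
        }
        apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
        /* ... inspect or transform data/len here ... */
    }

    /* send everything downstream right away */
    return ap_pass_brigade(f->next, bb);
}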

>>Obviously it's not how things work at the moment, as the memory is never
>>freed (which could probably be dealt with), but the real problem is that
>>no data will leave the server out before it was completely read in.
> Yes, that would be the real problem.  So somewhere there is a filter (or
> maybe the proxy itself) buffering the entire data stream before sending
> it.  That is a bug.

Are you saying that if I POST N MBytes of data to the server and just have the
server send it back to me, its memory won't grow by those N MBytes for the
duration of the request? Can you pipe the data out as it comes in? I thought
you had to read all the data in before you could send any of it out (at least
when the same client sends and receives the data).
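
The streaming version of that echo idea would presumably look something like
the sketch below: read one brigade at a time and pass it to the output
filters before reading the next (hypothetical handler, minimal error
handling). If the filter chain really streams, memory use is bounded by the
brigade size, not the body size:

/* Echo handler that streams the request body straight back. */
static int echo_handler(request_rec *r)
{
    apr_bucket_brigade *bb =
        apr_brigade_create(r->pool, r->connection->bucket_alloc);
    int seen_eos = 0;

    do {
        apr_status_t rv = ap_get_brigade(r->input_filters, bb,
                                         AP_MODE_READBYTES,
                                         APR_BLOCK_READ, 8192);
        if (rv != APR_SUCCESS) {
            return HTTP_INTERNAL_SERVER_ERROR;
        }
        seen_eos = !APR_BRIGADE_EMPTY(bb)
                   && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb));

        /* push this chunk out before reading the next one */
        ap_pass_brigade(r->output_filters, bb);
        apr_brigade_cleanup(bb);
    } while (!seen_eos);

    return OK;
}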

p.s. obviously I should stop using the word 'obviously' ;)

Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
mod_perl Guide --->
