httpd-dev mailing list archives

From Cliff Woolley <>
Subject Re: filtering huge request bodies (like 650MB files)
Date Wed, 10 Dec 2003 22:57:40 GMT
On Wed, 10 Dec 2003, Stas Bekman wrote:

> Chris is trying to filter a 650MB file coming in through a proxy. Obviously he
> sees that httpd-2.0 is allocating > 650MB of memory, since each bucket will
> use the request's pool memory and won't free it until after the request is
> over.

Whoa.  Obviously?  It is NOT supposed to do that.  Buckets do not use pool
memory for that very reason (well, that's one of the two or three big
reasons buckets exist in the first place).

> could theoretically reuse that memory for the next brigade.

Which is exactly what is supposed to happen.

> Obviously it's not how things work at the moment, as the memory is never
> freed (which could probably be dealt with), but the real problem is that
> no data will leave the server out before it was completely read in.

Yes, that would be the real problem.  So somewhere there is a filter (or
maybe the proxy itself) buffering the entire data stream before sending
it.  That is a bug.

