httpd-dev mailing list archives

From "William A. Rowe, Jr." <>
Subject Re: filtering huge request bodies (like 650MB files)
Date Wed, 10 Dec 2003 23:23:14 GMT
At 04:57 PM 12/10/2003, Cliff Woolley wrote:
>On Wed, 10 Dec 2003, Stas Bekman wrote:
>> Obviously it's not how things work at the moment, as the memory is never
>> freed (which could probably be dealt with), but the real problem is that
>> no data will leave the server out before it was completely read in.
>Yes, that would be the real problem.  So somewhere there is a filter (or
>maybe the proxy itself) buffering the entire data stream before sending
>it.  That is a bug.

It's NOT the proxy - I've been through it many times - and AFAICT we have
a simple leak in that we don't reuse the individual pool buckets, so memory
creeps up over time.  That wasn't even the end of the world, until someone at
ApacheCon pointed out that continuous proxied HTML streams (e.g. video) really
gobble memory; even at 8KB/min+ this isn't acceptable.

So it's not the proxy or the core output filter.  The bug lies in the filter itself.
Is it Chris's own filter or one of ours?  Whichever it is, it would be nice to
get this fixed.  This is why we ought not to flip subject headers, Stas; I'm
really too short on time to go fumbling for the original posts.  We need to know
which filters are inserted, and therefore possibly suspect.
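For context, the difference between a filter that buffers the whole body and one that streams is roughly the following, sketched against the Apache 2.x output filter API (this is illustrative only, not Chris's actual filter; the function names are made up, but ap_save_brigade and ap_pass_brigade are the real calls):

```c
#include "httpd.h"
#include "util_filter.h"

/* Anti-pattern: set every brigade aside until EOS arrives, so nothing
 * leaves the server (and memory grows) until the entire body - possibly
 * 650MB - has been read in. */
static apr_status_t buffering_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket_brigade *saved = f->ctx;   /* accumulated data so far */
    apr_status_t rv;

    /* ap_save_brigade creates the save-to brigade on first use and
     * copies the incoming buckets into request-pool memory. */
    rv = ap_save_brigade(f, &saved, &bb, f->r->pool);
    f->ctx = saved;
    if (rv != APR_SUCCESS) {
        return rv;
    }
    if (!APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(saved))) {
        return APR_SUCCESS;               /* hold everything: the bug */
    }
    return ap_pass_brigade(f->next, saved);
}

/* Streaming shape: inspect or transform the buckets in this brigade,
 * then pass it down the chain immediately, so per-call memory use
 * stays bounded no matter how long the stream runs. */
static apr_status_t streaming_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    /* ... examine/modify buckets in bb here ... */
    return ap_pass_brigade(f->next, bb);
}
```

A filter doing the first shape anywhere in the chain would produce exactly the symptom reported: no data leaves the server until the body is completely read.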

