httpd-dev mailing list archives

From Graham Dumpleton <graham.dumple...@gmail.com>
Subject Re: Mitigating the Slowloris DoS attack
Date Tue, 23 Jun 2009 04:07:09 GMT
2009/6/23 Weibin Yao <nbubingo@gmail.com>:
> William A. Rowe, Jr. at 2009-6-23 2:00 wrote:
>>
>> Andreas Krennmair wrote:
>>
>>>
>>> * Guenter Knauf <fuankg@apache.org> [2009-06-22 04:30]:
>>>
>>>>
>>>> wouldn't limiting the number of simultaneous connections from one IP
>>>> already help? E.g. something like:
>>>> http://gpl.net.ua/modipcount/downloads.html
>>>>
>>>
>>> Not only would this be futile against the Slowloris attack (imagine n
>>> connections from n hosts instead of n connections from 1 host), it would
>>> also potentially lock out groups of people behind the same NAT gateway.
>>>
>>
>> FWIW mod_remoteip can be used to partially mitigate the weakness of this
>> class of solutions.
>>
>> However, it only works for known, trusted proxies, and can only be safely
>> used for those with public IPs.  Where the same 10.0.0.5 on your private
>> NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
>> issues like Allow from 10.0.0.0/8 become painfully obvious.  I haven't
>> found a good solution, but mod_remoteip still needs one, eventually.
>>
>>
>
> I have an idea to mitigate the problem: put Nginx as a reverse proxy
> server in front of Apache.

Although your comment is perhaps heresy here, it does highlight one of
the things that nginx is good at, even if you don't use the common
deployment where nginx serves the static files and Apache handles just
the dynamic web application. That is, it can isolate Apache from slow
clients, whether that be an attack as in this case, or just normal
users on slow networks.
The way nginx's proxy module buffers up request content to disk before
actually sending the request on to the backend also helps, because
Apache's limited request handler threads aren't tied up until the
request content is completely available. That said, nginx does have an
upper limit on this buffering and will still stream once the POST
content is large enough.
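To make the reverse proxy suggestion concrete, a minimal nginx
configuration along these lines might look as follows. The directive
names are standard nginx, but the ports, buffer sizes and limits are
purely illustrative and should be checked against your nginx version:

```nginx
http {
    upstream apache_backend {
        server 127.0.0.1:8080;   # Apache listening on a backend-only port
    }

    server {
        listen 80;

        location / {
            # Buffer the client request body before contacting Apache;
            # bodies larger than the buffer spill to a temporary file.
            client_body_buffer_size 128k;
            client_max_body_size    10m;

            proxy_pass http://apache_backend;
        }
    }
}
```

With this in front, a slow client trickling its request in only ever
occupies an nginx connection, not one of Apache's worker threads.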

The nginx server works better at avoiding problems with slow clients
because it is event driven rather than threaded, and so can handle
more connections without needing to tie up expensive threads.
Unfortunately, trying to make socket accept handling in Apache event
driven, with requests only handed off to a thread for processing once
they are ready, can introduce its own problems. This is because an
event driven system tends to greedily accept new socket connections.
In a multiprocess server configuration, a single process may accept
more than its fair share of connections and, by the time it has read
the initial request headers, not have enough available threads to
handle the requests. In the meantime, another server process which did
not get in quickly enough to accept some of those connections could be
sitting there idle. How you mediate between multiple server processes
to avoid this sort of problem would be tricky, if it can be done at
all.
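One classic mediation technique for the blocking case is an "accept
mutex": serialize accept() so that only one process takes new
connections at a time, which is essentially what httpd's own accept
serialization does. Here is a hedged sketch of the idea, using threads
and a plain threading.Lock for brevity; a real multiprocess version
would need a cross-process lock (fcntl or a SysV semaphore), and all
the names here are illustrative, not Apache code:

```python
# Sketch: serialize accept() across workers with an "accept mutex",
# so no one worker can greedily grab more connections than it can
# handle while others sit idle. Threads stand in for processes.
import socket
import threading

def serve(listen_sock, accept_mutex, handled, quota):
    for _ in range(quota):
        with accept_mutex:                 # only the holder may accept
            conn, _ = listen_sock.accept()
        # Handle outside the mutex: a slow client ties up only this
        # worker, not the accept path.
        data = conn.recv(4096)
        conn.sendall(b"echo:" + data)
        conn.close()
        handled.append(data)

def run_demo(n_workers=2, quota=3):
    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]
    mutex, handled = threading.Lock(), []
    workers = [threading.Thread(target=serve,
                                args=(srv, mutex, handled, quota))
               for _ in range(n_workers)]
    for t in workers:
        t.start()
    for i in range(n_workers * quota):     # sequential test clients
        c = socket.create_connection(("127.0.0.1", port))
        c.sendall(b"req%d" % i)
        c.shutdown(socket.SHUT_WR)
        c.recv(4096)                       # wait for the echo reply
        c.close()
    for t in workers:
        t.join()
    return handled
```

Note this only fixes fairness of accept itself; it does not, by
itself, solve the event driven case where a process accepts many
connections and only later discovers it lacks threads to handle them.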

Anyway, now for a hare-brained suggestion that could bring some of
this nginx goodness to Apache, although no doubt it would have various
limitations which, to be solved properly and integrated seamlessly
into Apache, would require some changes in the core.

The idea here is to have an Apache module which spawns off its own
child process implementing a very small, lightweight, event driven
proxy that listens on the real listener sockets you want to expose.
This process's sole job would be to read in the request headers,
perhaps optionally buffer up request content, and then squirt it all
across to the real Apache child server processes to be handled once it
has all the information it needs. To that end, it wouldn't be a
general purpose proxy but quite customised. As such, it could perhaps
even be made more efficient than nginx for this particular job of
protecting Apache from such things as slow clients.
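A toy, single-process sketch of that shield process, under the
assumption that it only needs to recognise the end of the request
headers (CRLF CRLF) before opening a backend connection. Everything
here (function names, structure, the blocking response relay) is
illustrative of the shape of the idea, not a real module:

```python
# Toy "shield": an event-driven front end that reads the full request
# headers from each (possibly slow) client before it opens a backend
# connection, so slow clients never tie up a backend worker.
import selectors
import socket

HEADER_END = b"\r\n\r\n"

def headers_complete(data):
    """True once the request-header block has fully arrived."""
    return HEADER_END in data

def relay(client, backend):
    # Blocking relay of the backend's response, for brevity; a real
    # shield would keep this in the event loop as well.
    client.setblocking(True)
    while True:
        chunk = backend.recv(4096)
        if not chunk:
            break
        client.sendall(chunk)
    backend.close()
    client.close()

def run_shield(listen_addr, backend_addr):
    sel = selectors.DefaultSelector()
    srv = socket.create_server(listen_addr)
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ)
    buffers = {}                      # client socket -> bytes so far

    while True:
        for key, _ in sel.select():
            sock = key.fileobj
            if sock is srv:           # new (possibly slow) client
                client, _ = srv.accept()
                client.setblocking(False)
                buffers[client] = b""
                sel.register(client, selectors.EVENT_READ)
            else:
                chunk = sock.recv(4096)
                if not chunk:         # client gave up mid-headers
                    sel.unregister(sock)
                    sock.close()
                    buffers.pop(sock, None)
                    continue
                buffers[sock] += chunk
                if headers_complete(buffers[sock]):
                    # Only now tie up a backend connection.
                    backend = socket.create_connection(backend_addr)
                    backend.sendall(buffers.pop(sock))
                    sel.unregister(sock)
                    relay(sock, backend)
```

A Slowloris-style client that dribbles header bytes forever just sits
in the buffers dict here, consuming one cheap socket rather than an
Apache thread; request body buffering would extend the same pattern.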

For plain HTTP at least, this probably wouldn't be too hard to do and
likely wouldn't need any changes to the core. You could even make its
use optional, to the extent of it only applying to certain virtual
hosts. Where it all gets a lot harder, though, is virtual hosts which
use HTTPS.

So, that is my crazy thought for the day, and I am sure it will be
derided for what it is worth.

I still find the thought interesting though and it falls into that
class of things I find interesting due to the challenge it presents.
:-)

Graham
