httpd-dev mailing list archives

From Micha Lenk <mi...@lenk.info>
Subject Re: URL scanning by bots
Date Sat, 04 May 2013 18:45:04 GMT
Hi,

On 03.05.2013 at 11:27, Dirk-Willem van Gulik wrote:
> FWIW - the same sentiments were expressed when 'greylisting[1]' in
> SMTP came into vogue. For small relays (speaking just from personal
> experience and from the vantage of my own private tiny MTA's) that
> has however not been the case. Greylisting did dampen things
> significantly - and the effect lasts to this day.

The main difference I see here is that an SMTP server using greylisting
can close the client connection almost immediately, keeping only minimal
state, usually on cheap disk. So, until the client retries, neither the
kernel nor any server process has to deal with the greylisting during
the delay period.
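To make that concrete, here is a rough sketch of the greylisting decision described above (the function name, the in-memory table, and the 5-minute window are assumptions for illustration, not taken from any particular MTA; real implementations persist the table cheaply on disk):

```python
import time

# Hypothetical in-memory greylist; real MTAs persist this cheaply on disk.
GREYLIST = {}  # (client_ip, sender, recipient) -> first-seen timestamp
DELAY = 300    # assumed retry window: 5 minutes

def greylist_check(client_ip, sender, recipient, now=None):
    """Return an SMTP reply code: 450 (try again later) or 250 (accept).

    The server records only this tiny tuple and closes the connection
    immediately -- no process sits around waiting out the delay.
    """
    now = time.time() if now is None else now
    key = (client_ip, sender, recipient)
    first_seen = GREYLIST.setdefault(key, now)
    if now - first_seen < DELAY:
        return 450  # temporary failure; a legitimate MTA will retry later
    return 250      # the client retried after the delay: accept

# First attempt is deferred; a retry 10 minutes later is accepted.
assert greylist_check("192.0.2.1", "a@example.org", "b@example.net", now=1000) == 450
assert greylist_check("192.0.2.1", "a@example.org", "b@example.net", now=1600) == 250
```

The key point is that rejecting with a 4xx code and dropping the connection costs the server almost nothing; the waiting happens entirely on the client's side.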

In HTTP this is totally different. You can't just return a temporary
error code and assume that the web browser will retry a reasonable
moment later. For this reason you would have to delay the real HTTP
response, and that has a substantial resource usage impact, as you have
to maintain state across all operating system layers: the network stack
needs to keep the TCP connection open, the kernel needs to maintain an
open socket for the server process, and the server process needs to
maintain some form of active HTTP request state -- for every single
delayed request. These resources would just wait for the delay timer to
expire, essentially hanging around without doing anything useful and
without changing the outcome of the actual HTTP transaction. As others
already pointed out, this opens the door to denial-of-service attacks
through excessive resource usage. From a security point of view, you
definitely don't want such HTTP response delays.
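A naive delaying server, sketched here with Python's standard http.server (a thread per connection is a property of this toy model, not of httpd), shows the cost: every delayed request pins a thread, an open socket, and the kernel's TCP connection state for the whole sleep, so an attacker merely needs many cheap concurrent connections to exhaust them:

```python
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

DELAY = 1  # seconds each suspect request is held (kept short for illustration)

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # This sleep keeps the handler thread, its socket, and the
        # kernel's TCP connection state alive the entire time -- one
        # full set of resources per in-flight request, doing nothing.
        time.sleep(DELAY)
        self.send_response(404)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this sketch

# ThreadingHTTPServer spawns a thread per connection, so N concurrent
# scanner requests tie up N threads for DELAY seconds each -- exactly
# the denial-of-service exposure described above.
# ThreadingHTTPServer(("", 8080), TarpitHandler).serve_forever()
```

Contrast this with the SMTP case: there the server sheds all state immediately, while here it must hold every layer of state open for the full delay.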

Regards,
Micha
