httpd-dev mailing list archives

From Issac Goldstand <>
Subject Re: URL scanning by bots
Date Wed, 01 May 2013 08:08:01 GMT
On 30/04/2013 21:38, Ben Laurie wrote:
> On 30 April 2013 11:14, Reindl Harald <> wrote:
>> On 30.04.2013 12:03, André Warnier wrote:
>>> As a general idea thus, anything which impacts the delay to obtain a 404 response
>>> impacts these bots much more than it impacts legitimate users/clients.
>>> How much?
>>> Let us imagine for a moment that this suggestion is implemented in Apache
>>> and is enabled in the default configuration.  And let's imagine that after a
>>> while, 20% of the Apache webservers deployed on the Internet have this feature
>>> enabled, and are now delaying any 404 response by an average of 1000 ms.
>> which is an invitation for a DDoS attack, because it would
>> make it easier to tie up every available worker, and the
>> delay would at the same time render active iptables rate
>> controls useless, because you need fewer connections for
>> the same damage
>>
>> no - this idea is very, very bad, and if you had ever seen a
>> DDoS attack from tens of thousands of IP addresses on a
>> machine you maintain, you would not consider anything
>> which makes responses slower, because that is the wrong
>> direction
> There's no reason to make this a DoS vector - clearly you can queue
> all the delayed responses in a single process and not tie up available
> processes. And if that process gets full, you just drop them on the
> floor.

1) You're still keeping TCP connections open in the kernel, which on an
incredibly busy server are an important resource.
2) What do you mean "drop them on the floor"?  You're going to let your
real users suffer by getting dropped connections instead of a 404?  At
best, they'll just try again, to the same URL, producing another 404
that'll hang around for a long time.  At worst, you're pissing off real
users.
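For the sake of illustration, Ben's idea of parking delayed 404s on a single event loop instead of tying up worker processes can be sketched roughly as below. This is a toy standalone server, not how httpd is actually structured; the names and numbers (MAX_PENDING, DELAY_SECONDS, the trivial request handling) are all hypothetical. To address the dropped-connection objection above, when the pending set is full it answers immediately rather than dropping the connection:

```python
import asyncio

MAX_PENDING = 100    # hypothetical cap on simultaneously delayed responses
DELAY_SECONDS = 1.0  # the ~1000 ms delay discussed above

pending = 0  # count of 404s currently being delayed

async def handle(reader, writer):
    global pending
    await reader.read(1024)  # read and discard the request (toy parser)
    body = b"Not Found"
    if pending < MAX_PENDING:
        pending += 1
        try:
            # The event loop keeps serving other connections while this
            # coroutine sleeps, so no worker process/thread is blocked.
            await asyncio.sleep(DELAY_SECONDS)
        finally:
            pending -= 1
    # When the pending set is "full", answer immediately instead of
    # dropping the connection, so real users still get their 404.
    writer.write(
        b"HTTP/1.0 404 Not Found\r\n"
        b"Content-Length: %d\r\n\r\n" % len(body) + body
    )
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8404)
    async with server:
        await server.serve_forever()

# To try it: asyncio.run(main()), then request any URL on port 8404.
```

Note that this only addresses worker exhaustion; as pointed out above, each delayed response still holds a TCP connection in the kernel for the duration of the sleep.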