tomcat-users mailing list archives

From André Warnier
Subject Re: Tomcat access log reveals hack attempt: "HEAD /manager/html HTTP/1.0" 404
Date Wed, 17 Apr 2013 19:10:46 GMT
Christopher Schultz wrote:
> André,
> On 4/17/13 1:27 PM, André Warnier wrote:
>> Leo Donahue - RDSA IT wrote:
>>>> -----Original Message----- From: André Warnier
>>>> [] Subject: Re: Tomcat access log reveals
>>>> hack attempt: "HEAD /manager/html HTTP/1.0" 404
>>>> That's the idea.  That is one reason why I brought this
>>>> discussion here: to check whether, if the default factory setting
>>>> were for example a 1000 ms delay for each 404 answer, anyone
>>>> could think of a severe detrimental side-effect.
>>> What if I send 10,000 requests to your server for some file that
>>> is not there?
>> Then you will just have to wait 10,000+ seconds in total before you
>> get all your corresponding 404 responses. Which is exactly the
>> point.
> Sounds like a DoS to me. What you really want to do is detect an
> attacker (or annoying client) and block them without having to use
> your own resources. Maintaining a socket connection for an extra
> second when you don't have to is using a resource, even if the
> CPU is entirely idle, and even if you can return the
> request-processing thread to the thread-pool before you wait that
> one second to respond.
> What I describe above is a great case for using fail2ban (not sure if
> it exists in the Windows world): you watch a log file (e.g. the access
> log) for lots of 404s coming from a single place and then ban that
> client at the firewall level. That's much more efficient than sleeping
> for a second for each 404.
> I'm sure you'll lock out most web spiders pretty quickly, though ;)

Like the other people here who have pointed out better ways to protect their servers,
*you are right, and I agree with all you are saying*.
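
For completeness, a minimal fail2ban setup along the lines you describe might look
roughly like this (an untested sketch; the filter name, log path and thresholds are
only examples):

    # /etc/fail2ban/filter.d/tomcat-404.conf -- example filter
    [Definition]
    # match common-log-format access log lines whose status is 404
    failregex = ^<HOST> \S+ \S+ \[[^\]]+\] "[A-Z]+ [^"]*" 404
    ignoreregex =

    # excerpt from /etc/fail2ban/jail.local, using the filter above
    [tomcat-404]
    enabled  = true
    filter   = tomcat-404
    logpath  = /var/log/tomcat/localhost_access_log.*.txt
    # ban for one hour any address that produces 20 404s within a minute
    maxretry = 20
    findtime = 60
    bantime  = 3600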

But there is one point that I have been trying to make in different ways, and which
obviously isn't getting through, so I'll try again. It is the following:

All the better methods, software and configurations that you point out are effective,
and better than just delaying 404 responses by 1 second.
*But they all require expertise, time and resources* to set up.

Which means that out of the 600 million webservers which are currently on-line, only a 
small proportion will ever implement them.

Let's say that one in six of those webservers (100 million) implements one of these
professional schemes.  That leaves 500 million webservers which don't, among which say
1% (5 million) are really open to bot attacks via such URLs.
So right now, with a reasonable number of bots (100), I can scan 100,000 servers in 8
hours, and collect 1,000 future break-in candidates and future bots for my net.
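
In round numbers:

    600 million webservers - 100 million protected = 500 million unprotected
    1% of 500 million really vulnerable            = 5 million
    1% of any 100,000 servers scanned              = ~1,000 hits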
And this, *no matter how well the "professional" sites are configured*.

That is worth it, so I will run my 100 bots and bother 100,000 servers (including
yours, which is well-protected and doesn't care).
And then I will go to bed, and when I wake up tomorrow morning, my list of 1,000 real
targets will be ready for further probing.

On the other hand, if 50% of those 500 million non-professional webservers, *because it
is simple and cheap and doesn't require any expertise or time or resources*, do delay
each 404 by 1 s, then it takes my 100 bots 250 hours (about 10 days) to collect the
same 1,000 targets.
And on each of those days, instead of 100,000 servers being bothered, 90,000 will have
been left alone, because I can only scan 10,000 per 24 h.
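
To make "simple and cheap" concrete, here is a minimal sketch of such a delay as a
Tomcat Valve, assuming Tomcat 7's Valve API (the class name is mine and the sketch is
untested):

    package example;

    import java.io.IOException;

    import javax.servlet.ServletException;

    import org.apache.catalina.connector.Request;
    import org.apache.catalina.connector.Response;
    import org.apache.catalina.valves.ValveBase;

    /** Hypothetical valve that holds back every 404 response by one second. */
    public class Delay404Valve extends ValveBase {

        @Override
        public void invoke(Request request, Response response)
                throws IOException, ServletException {
            // Let the rest of the pipeline produce the response first.
            getNext().invoke(request, response);

            // If it turned out to be a 404, wait one second before the
            // connector finishes sending it back to the client.
            if (response.getStatus() == 404) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

It would be enabled with a single <Valve className="example.Delay404Valve"/> element
in server.xml.  And yes, as you say, it keeps a request thread busy for that extra
second; that is the trade-off for the simplicity.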
