httpd-dev mailing list archives

From Ben Reser <>
Subject Re: URL scanning by bots
Date Fri, 03 May 2013 04:18:23 GMT
On Wed, May 1, 2013 at 7:16 AM, André Warnier <> wrote:
> If it tries just one URL per server, and walks off if the response takes
> longer than some pre-determined value, then it all depends on what this
> value is.
> If the value is very small, then it will miss a larger proportion of the
> potential candidates. If the value is larger, then it will miss fewer
> candidate servers, but it will be able to scan comparatively fewer servers
> within the same period of time.

The question becomes: can they still achieve the number of hacked
servers they require, in the timeframe they require, while ignoring some
proportion of the servers on the Internet?  I think the answer to that
question is clearly yes.  The reason is that the number of poorly
updated and secured servers is much higher than the number of secure ones.

If you want the scanning to stop, a far more productive effort would be
to try to get the people running vulnerable systems to secure them.
Until that happens there will always be an incentive to scan, even if
you're tarpitting them on some systems.
