httpd-dev mailing list archives

From "William A. Rowe Jr." <>
Subject Re: URL scanning by bots
Date Fri, 03 May 2013 03:36:02 GMT
On Fri, 03 May 2013 01:53:01 +0200
Guenter Knauf <> wrote:

> On 02.05.2013 10:22, André Warnier wrote:
> >
> > But I am a bit at a loss as to what to do next.  I could easily
> > enough install such a change on my own servers (they are all
> > running mod_perl). But then, if it shows that the bots do slow down
> > on my servers or avoid them, it still doesn't quite provide enough
> > evidence to prove that this would benefit the Internet at large,
> > does it ?

> No. But you wrote above that it's not your intention to protect
> yourself and your servers, but rather that you want to cure the
> world and enable webservers to be run by 'folks who don't know
> what they do', or???

I like this meme.  The browser world learned long ago to protect its
users from themselves, to the point where one vendor went overboard
and set do-not-track as a default option for a time.

We seem to have a far less forgiving attitude.  It's true that most of
our users are competent professionals or dedicated hobbyists who want
to run servers, not our home PCs/netbooks/tablets/phones.  But not all
are equally experienced; if you have ever had to hand off admin of a
significant farm, or pick up a poorly set-up cluster of servers, you
will appreciate that even the most skilled admins want our defaults to
be sane.

This may not be the right proposal (I'm thinking a combo of honeypots
and on-the-fly iptables mods would serve the same purpose, better),
but let's not dismiss out of hand what seems simple or obvious to an
experienced sysadmin, only to have journeymen fall into whatever traps
we lay for them as defaults.
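To make the honeypot-plus-iptables idea concrete, here is one minimal
sketch of how it could be wired up with stock netfilter match modules
(the trap path /never-linked/ and the list name "botscan" are made-up
examples, and the string match only works for plaintext HTTP, not TLS):

```shell
# Config sketch, not a finished defense.  Any client that requests a
# honeypot URL (a path never linked from real pages) gets its source
# IP recorded, and further port-80 traffic from it is dropped.

# 1. Drop traffic from IPs already seen hitting the trap within the
#    last hour (3600 s); --update refreshes the timer on each attempt.
iptables -A INPUT -p tcp --dport 80 \
    -m recent --name botscan --update --seconds 3600 -j DROP

# 2. Record the source IP of any packet carrying a request for the
#    trap URL, using the Boyer-Moore string match.
iptables -A INPUT -p tcp --dport 80 \
    -m string --algo bm --string "GET /never-linked/" \
    -m recent --name botscan --set
```

In practice a log-watcher such as fail2ban is more robust, since it
sees requests after TLS termination and after the server has parsed
them, but the ordering above shows the on-the-fly idea: check-and-drop
first, then mark, so a scanner blacklists itself on its first hit.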
