httpd-users mailing list archives

From Hamilton Vera <hamil...@i2.com.br>
Subject Re: [users@httpd] limiting connections per ip address in apache2 when under attack
Date Thu, 21 Jun 2007 13:35:23 GMT
You can try using iptables to limit the number of parallel TCP connections per source address:

# $IPTABLES and $WAN are shell variables; "logdropdos" is a user-defined chain
$IPTABLES -A INPUT -p tcp -i $WAN -s 0/0 --syn --dport 80 \
  -m connlimit --connlimit-above 10 -j logdropdos
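
The "logdropdos" target above is not a built-in chain; a minimal sketch of how it
could be defined (the log prefix and the rate limit are placeholders of mine, not
part of the original rule):

$IPTABLES -N logdropdos
$IPTABLES -A logdropdos -m limit --limit 5/minute -j LOG --log-prefix "connlimit drop: "
$IPTABLES -A logdropdos -j DROP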

Or implement a FreeBSD firewall with QoS, applying traffic shaping to parallel TCP
connections.
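
On FreeBSD that could be sketched with ipfw and dummynet along these lines (the
rule and pipe numbers and the 128Kbit/s rate are placeholder values, and dummynet
support has to be present in the kernel):

# one dynamic 128 Kbit/s pipe per source address for inbound HTTP
ipfw pipe 1 config bw 128Kbit/s mask src-ip 0xffffffff
ipfw add 100 pipe 1 tcp from any to me dst-port 80 in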

I hope this helps.


On Thu, 21 Jun 2007, graham wrote:

> Hi,
>
> I've just become involved with a system running Apache 2.0.55 on Ubuntu with 
> Linux 2.6.17.
>
> The system is currently unable to run due to repeated downloads of a large 
> number of PDFs by systems located in China. These are hogging all the sockets and 
> eventually causing Apache to die (I'm appending more details below in case 
> I've got the wrong end of the stick). The IP addresses of these systems vary; 
> they are not a single block, although they are obviously working together 
> (different IP addresses will ask for sequentially related PDFs). Each IP 
> address will request multiple files in parallel.
>
> I'm told that the mod_limitipconn module would solve my problem by limiting the 
> simultaneous accesses from any one IP address. There is no version of this 
> available for Apache 2 on Ubuntu. I'm wondering if this is because similar 
> abilities have been built into Apache 2 itself, but I haven't managed to find 
> any.
>
> Does anyone have any suggestions?
>
> Thanks
> Graham
> -----------------------------------------------
> Notes from log:
>
> The system is running OK, not at a particularly heavy load (<1.0), and Apache 
> is apparently running OK and not reporting errors [corrected later].
>
> Tailing the Apache log file shows that the only accesses to the system are 
> GETs of PDFs from two Chinese systems, 218.4.152.91 and 222.218.254.221, 
> which are obviously running the same software.
>
> These systems are trying to systematically work their way through downloading 
> all the Chinese PDFs. When a PDF is too large and the download times out, they 
> immediately try again (at any one moment each system is trying to download 3 
> or 4 PDFs).
>
> If I restart Apache, I immediately get accesses from all over the place, 
> including the two Chinese systems. Eventually the Chinese accesses capture all 
> the Apache processes, and nothing else can get access.
>
> 'Solution' found for this: turn Apache off for a few minutes. The Chinese 
> systems went away, and all was fine again.
>
> One hour later:
>
> The chinese systems, and the problems, returned. A little more data this 
> time.
>
> Once the Chinese systems are established, netstat shows that they occupy most 
> sockets but are mostly in CLOSE_WAIT state. All other requests are stuck in 
> SYN_RECV.
>
> After this continues for a while, the Apache processes gradually start to die 
> off with the following sequence:
>
> [alert] (11): setuid: unable to change to uid: 33 (33 is www-data)
>
> [alert] Child 691 returned a Fatal error... Apache is exiting!
>
> [emerg] (43): couldn't grab the accept mutex
>
> semop: Invalid argument
>
> ---------------------------------------------------------------------
> The official User-To-User support forum of the Apache HTTP Server Project.
> See <URL:http://httpd.apache.org/userslist.html> for more info.
> To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
>  "   from the digest: users-digest-unsubscribe@httpd.apache.org
> For additional commands, e-mail: users-help@httpd.apache.org
>
>
