httpd-users mailing list archives

From: Tony Stevenson <t...@pc-tony.com>
Subject: Re: [users@httpd] setting MaxClients locally?
Date: Fri, 08 Jun 2007 10:20:43 GMT


Allen Pulsifer wrote:
> Hi Martijn,
>
> You could run two completely separate instances of httpd, one listening on
> port 80 with MaxClients=100 serving your normal content, and the other
> listening on port 8000 with MaxClients=20 serving your large PDFs.  This
> would require two completely separate httpd.conf files (for example,
> httpd.conf and httpd-pdf.conf), and launching the second instance using the
> httpd -f option.  You would also have to change all links to the PDFs from
> www.yoursite.com/file.pdf to www.yoursite.com:8000/file.pdf.  Alternatively,
> you could assign the second server instance to a different IP address
> instead of a different port, configure DNS to make this IP address answer to
> a subdomain like pdfs.yoursite.com, and then change the PDF links from
> www.yoursite.com/file.pdf to pdfs.yoursite.com/file.pdf.
>   
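
For what it's worth, that second config file would boil down to something 
like the sketch below. This is only an illustration -- the file name, paths, 
DocumentRoot and the spare-server numbers are made up, not taken from your 
setup; only Listen 8000 and MaxClients 20 come from the suggestion above.

# httpd-pdf.conf -- second, capped instance serving only the PDFs (sketch)
Listen 8000
PidFile  /var/run/httpd-pdf.pid        # must not clash with the main instance
ErrorLog /var/log/httpd/pdf-error.log  # likewise, give it its own logs

# prefork MPM: MaxClients caps the number of child processes
<IfModule prefork.c>
    StartServers     2
    MinSpareServers  2
    MaxSpareServers  5
    MaxClients      20
</IfModule>

DocumentRoot /var/www/pdf

You would start it alongside the main server with something like
'httpd -f /etc/httpd/conf/httpd-pdf.conf'.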

An alternative to changing all your links would be to use a reverse proxy.

For example:

<Location /pdf>
    ProxyPass http://localhost:8080/pdf
    ProxyPassReverse http://localhost:8080/pdf
</Location>

This way the change is transparent to the end user, and they remain 
on your server, under your control.
Note, however, that this approach only limits the number of connections 
from the front-end server to the back-end server.

The limit is also global: it applies to every user trying to 'suck down' 
all your files, not just the greedy one, but it will stop the server from 
being flattened in the process.

There is no easy way of doing what you want directly; one way or another 
you have to farm the work off to another httpd process.
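
For completeness, the front-end instance needs mod_proxy and mod_proxy_http 
loaded for the <Location> block above to work. A minimal sketch, assuming 
the stock module layout:

# Front-end (public) instance: proxy modules used by the Location block above
LoadModule proxy_module      modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

The back end on port 8080 would then be exactly the kind of second, capped 
instance described above, with the small MaxClients in its own config file.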



> Another option might be to move the PDF files to a hosting service such as
> Amazon S3, http://www.amazon.com/S3-AWS-home-page-Money/.  Files uploaded to
> Amazon S3 can be made publicly available at a URL such as
> http://s3.amazonaws.com/your-bucket-name/file.pdf or
> http://your-bucket-name.s3.amazonaws.com/file.pdf, or using DNS tricks, at a
> virtual host such as pdfs.yoursite.com/file.pdf or
> www.yourpdfsite.com/file.pdf.  See
> http://docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html.
> The cost of S3 is $0.18 per GB of data transfer, plus storage and request
> charges.
>
> Allen
>
>   
>> -----Original Message-----
>> From: Martijn [mailto:sweetwatergeek@googlemail.com] 
>> Sent: Friday, June 08, 2007 5:27 AM
>> To: users@httpd.apache.org
>> Subject: [users@httpd] setting MaxClients locally?
>>
>>
>> Hello.
>>
>> A bit of a long shot... On my website, there is a directory 
>> containing a relatively large number of big files (PDFs). 
>> Every now and then, there is a user that sees them, gets very 
>> excited and downloads them all within a short period of time 
>> (probably using FF's DownThemAll plugin or something 
>> similar). Fair enough, that's what they're for, but, 
>> especially if the user is on a slow connection, these downloads 
>> will tie up all available child processes, making the site 
>> unreachable for other users, which leads to swapping and, 
>> eventually, crashing.
>>
>> I'm looking for a quick, on the fly way to prevent this from 
>> happening (in the long run, the whole server code will be 
>> re-written, so I should be able to use some module - or write 
>> one myself). I googled a bit about limiting the number of 
>> child processes per IP address, but that seems to be a tricky 
>> business. Then I was thinking, is there perhaps a nice way of 
>> setting MaxClients 'locally' to a small number, so that no 
>> more than, say, 10 or 20 child processes will be dealing with 
>> requests from a certain directory, while the other processes 
>> will happily be dealing with the rest? E.g. (non-working 
>> example!) something like
>>
>> MaxClients 100
>>
>> <Directory /pdf>
>> LocalMaxClients 20
>> </Directory>
>>
>> I know this won't be the nicest solution - it would still 
>> prevent other, non-greedy users from downloading the PDFs while 
>> the greedy person is leeching the site - but something like 
>> this would make my life a lot easier for the time being. Oh, 
>> and perhaps I should add that I don't really care about bandwidth.
>>
>> Any ideas?
>>
>> Martijn.
>>

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org

