httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject Re: worth fixing "read headers forever" issue?
Date Thu, 01 Jan 1998 06:36:32 GMT


On Wed, 31 Dec 1997, Marc Slemko wrote:

> What would be cool to see (and I would like to play with if I worked at a
> large web hosting company...) is changes that make it very difficult for
> hits on any one part (be it userdir, virtual domain, etc.) of the docspace
> to impact other parts while still using the same pool of servers.

I think we're missing one API phase to do this perfectly as a module...
but here's a solution that doesn't cost much in performance.

Let h(r) be a hash function on a request r, such that h(r1) == h(r2)
for two requests r1 and r2 to the "same part of the server".  Let H be a
table of unsigned chars in shared memory, indexed by the hash function.
For best effect we want to keep the table within the L2 cache; the size
of the table will depend on h().  Note that unsigned chars work well
for servers with fewer than 255 children, which is probably a common
enough case.
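
Roughly, in C -- everything here is illustrative, the table size, the
choice of key and the hash itself are just placeholders a real module
would tune:

#define H_SIZE 16384               /* 16K one-byte slots sits easily in L2 */

static unsigned char *H;           /* lives in a shared memory segment */

/* h(r): map a request's "part of the server" -- say the vhost name or a
 * /~user/ prefix -- onto a slot, so requests for the same part collide
 * on purpose.  djb2-style string hash, nothing clever.                  */
static unsigned int h(const char *key)
{
    unsigned int v = 5381;
    while (*key)
        v = v * 33 + (unsigned char) *key++;
    return v % H_SIZE;
}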

When a request is received, the child increments H[h(r)] atomically.
If the value is above a threshold the child goes to sleep for K seconds;
if it is above a higher threshold, it returns a 503 error.  When done
with the request it decrements H[h(r)] atomically.
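
Continuing the sketch, the per-request bookkeeping might look like this
(the two thresholds and K are knobs I'm making up, and the ++/-- would
have to be a real atomic op or a lock in practice, which I only note in
the comments):

#include <unistd.h>                /* sleep() */

#define THRESH_SLEEP   8           /* made up: start stalling here */
#define THRESH_REJECT 16           /* made up: refuse outright here */
#define K              5           /* seconds to sleep when stalling */

/* On request arrival for slot i: returns 0 to proceed, -1 if the caller
 * should send a 503.  Either way leave_slot() runs when the request is
 * done, matching the increment.                                         */
static int enter_slot(unsigned char *H, unsigned int i)
{
    unsigned char n = ++H[i];      /* must really be atomic */
    if (n > THRESH_REJECT)
        return -1;
    if (n > THRESH_SLEEP)
        sleep(K);
    return 0;
}

static void leave_slot(unsigned char *H, unsigned int i)
{
    --H[i];                        /* must really be atomic */
}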

There's a problem with kill -9, core dumps, and other unexpected exits:
a child that dies without decrementing leaves its slot's count inflated.

Using gperf or other perfect hash generators you could generate a perfect
hash for the url-spaces.  You're probably also interested in doing this
for incoming IP addresses, to prevent too many simultaneous connections
from a single client; it should be possible to make the hash table large
enough to deal with this too.
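
The per-IP case can share the same table -- just hash the peer's address
into it (another off-the-cuff hash, with H_SIZE as above; making the
table bigger is how you keep the two uses from colliding too often):

#include <netinet/in.h>
#include <arpa/inet.h>

#define H_SIZE 16384               /* as above */

static unsigned int h_ip(struct in_addr a)
{
    unsigned int v = ntohl(a.s_addr);
    return (v ^ (v >> 16) ^ (v >> 8)) % H_SIZE;
}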

Actually, I bet you can do this in 1.3 with post_read_request.  Shouldn't
be a difficult module to write, except for the lack of a shared memory
abstraction.
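
Something like this, going from memory of the 1.3 module structure and
completely untested.  The anonymous mmap in the parent is just one way
to get the shared segment on some of our platforms (which is exactly the
missing abstraction), the Host: name as the key is only a stand-in, and
the hash and thresholds are the same made-up ones as above:

#include <sys/mman.h>
#include <unistd.h>

#include "httpd.h"
#include "http_config.h"

#define H_SIZE        16384
#define THRESH_SLEEP      8
#define THRESH_REJECT    16
#define K                 5

static unsigned char *H;

static unsigned int h(const char *key)       /* same toy hash as above */
{
    unsigned int v = 5381;
    while (*key)
        v = v * 33 + (unsigned char) *key++;
    return v % H_SIZE;
}

/* Runs in the parent at startup, so every child inherits the mapping.
 * MAP_ANON isn't universal; elsewhere it's /dev/zero or SysV shm.      */
static void throttle_init(server_rec *s, pool *p)
{
    void *m = mmap(NULL, H_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANON, -1, 0);
    H = (m == MAP_FAILED) ? NULL : (unsigned char *) m;
}

static const char *throttle_key(request_rec *r)
{
    /* what counts as the "same part of the server" is the real policy
     * question; the Host: name (falling back to the uri) is a stand-in */
    return r->hostname ? r->hostname : r->uri;
}

/* post_read_request: bump the slot, stall or refuse if it's too busy */
static int throttle_post_read(request_rec *r)
{
    unsigned char n;

    if (!H)
        return DECLINED;
    n = ++H[h(throttle_key(r))];             /* should be atomic */
    if (n > THRESH_REJECT)
        return HTTP_SERVICE_UNAVAILABLE;
    if (n > THRESH_SLEEP)
        sleep(K);
    return OK;
}

/* logger: the request is done, give the slot back.  A child that dies
 * before it gets here never decrements -- the kill -9 problem above.  */
static int throttle_log(request_rec *r)
{
    if (!H)
        return DECLINED;
    --H[h(throttle_key(r))];                 /* should be atomic */
    return OK;
}

module MODULE_VAR_EXPORT throttle_module = {
    STANDARD_MODULE_STUFF,
    throttle_init,         /* initializer */
    NULL,                  /* create per-dir config */
    NULL,                  /* merge per-dir config */
    NULL,                  /* create per-server config */
    NULL,                  /* merge per-server config */
    NULL,                  /* command table */
    NULL,                  /* handlers */
    NULL,                  /* filename translation */
    NULL,                  /* check_user_id */
    NULL,                  /* check auth */
    NULL,                  /* check access */
    NULL,                  /* type checker */
    NULL,                  /* fixups */
    throttle_log,          /* logger */
    NULL,                  /* header parser */
    NULL,                  /* child init */
    NULL,                  /* child exit */
    throttle_post_read     /* post read-request */
};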

Dean

