httpd-dev mailing list archives

From Brian Behlendorf <>
Subject Re: hard_timeout()
Date Thu, 24 Oct 1996 02:47:43 GMT
On Wed, 23 Oct 1996, Ben Laurie wrote:
> If a connection is kept alive, hard_timeout() uses the keepalive timeout,
> instead of the server timeout. This surely isn't right, since the keepalive
> timeout is for the gaps between requests, and hard_timeout() is used during
> request processing?

There's no distinction in the code between inter-request timeouts and
during-request timeouts, apparently.  The hard_timeout seems to apply to the
entire length of time between the first connection (or, on a kept-alive
connection, the end of the previous request) and the end of the current
request.  Try it out with a "Timeout" directive of 10 seconds.  With the
default of 1200 seconds, though, the client gets 1200 seconds to complete its
first request - that's silly.  That could easily lead to a denial of service
attack, too.  The "fix" here should be to have another internal value for the
amount of time the server should wait for a complete request to arrive, say
10 seconds, while the inter-request keepalive timeout stays whatever is
configured via KeepAliveTimeout today.  In other words: 

X = # of seconds to complete request
  (Right now the default is 1200 seconds, or the value of "Timeout")

Y = # of seconds between keep-alive requests
  (Right now the default is 10 seconds)

client connects and begins request
client has X seconds to complete request    
client gets a response
connection kept alive
client has Y seconds before it must begin its next request
client begins request
client has X seconds to complete request
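
Roughly, in code, that timeline would look like the sketch below.  This is
only an illustration of where the two alarms would go, not a patch - the loop
shape, the helper names (wait_for_request_line, read_and_process_request) and
the "readtimeout" field are all made up here:

    /* Sketch only: the helpers and the server_rec "readtimeout" field are
     * hypothetical; the point is which alarm covers which phase. */
    void serve_connection (request_rec *r)
    {
        for (;;) {
            if (r->connection->keptalive)
                alarm (r->server->keep_alive_timeout);  /* Y: idle gap */
            else
                alarm (r->server->readtimeout);         /* X, from connect */

            if (!wait_for_request_line (r))   /* client never began a request */
                break;

            alarm (r->server->readtimeout);   /* X: finish reading the request */
            read_and_process_request (r);
            alarm (0);                        /* clear the timer once done */

            if (!r->connection->keepalive)    /* not kept alive - close */
                break;
            r->connection->keptalive = 1;
        }
    }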

If we want to keep things relatively simple, we can simply apply a heuristic
at the spot Ben is noting (http_main.c, line 458 in the current snapshot):

    if (r->connection->keptalive)
        /* allow the keep-alive gap plus the time to read the request */
        alarm (r->server->keep_alive_timeout + r->server->readtimeout);
    else
        alarm (r->server->readtimeout);

...along with a new server variable "readtimeout", which should default to
something like 20 seconds and should probably be a configurable option.
Right now the main problem is that the timeout for reading input and the
timeout for writing output are the same variable.  Dis es no good.
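
If it helps to picture the configurable option, something modeled on the way
the existing "Timeout" directive is wired up in http_core.c would probably
do.  The "ReadTimeout" name, the handler and the readtimeout field below are
guesses for illustration, not existing code:

    /* Rough guess at the directive plumbing, following the pattern of the
     * current "Timeout" handling in http_core.c. */
    const char *set_read_timeout (cmd_parms *cmd, void *dummy, char *arg)
    {
        cmd->server->readtimeout = atoi (arg);
        return NULL;
    }

    /* ...plus an entry in the core command table: */
    { "ReadTimeout", set_read_timeout, NULL, RSRC_CONF, TAKE1,
      "seconds a client may take to send its complete request" },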

If people think inventing a new variable is a good idea, let me know and I'll
implement it.  


