httpd-dev mailing list archives

From: Paul Querna <c...@force-elite.com>
Subject: Re: Problem with file descriptor handling in httpd 2.3.1
Date: Sat, 03 Jan 2009 23:36:29 GMT
Rainer Jung wrote:
> During testing 2.3.1 I noticed a lot of errors of type EMFILE: "Too many 
> open files". I used strace and the problem looks like this:
> 
> - The test case uses ab with HTTP keep-alive, concurrency 20 and a 
> small file, so it does about 2000 requests per second. 
> MaxKeepAliveRequests=100 (default)
> 
> - The file leading to EMFILE is the static content file, which can be 
> observed to be open more than 1000 times in parallel although ab 
> concurrency is only 20
> 
> - From looking at the code, it seems the file is closed during a 
> cleanup function associated with the request pool, which is triggered 
> by an EOR bucket
> 
> Now what happens under KeepAlive is that the content files are kept 
> open longer than the handling of the request, more precisely until 
> the connection is closed. So when MaxKeepAliveRequests * concurrency 
> > MaxNumberOfFDs, we run out of file descriptors.
> 
> I observed this behaviour with 2.3.1 on Linux (SLES10 64Bit) with 
> Event, Worker and Prefork. I haven't yet had time to retest with 2.2.

It should only happen in 2.3.x/trunk, because the EOR bucket is a new 
feature that lets MPMs do async writes once the handler has finished 
running.
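
To make the mechanics concrete, here is a rough sketch of the pattern 
(this is not the actual default handler code; the function names and 
error handling are made up for illustration):

  #include "httpd.h"
  #include "http_protocol.h"
  #include "apr_buckets.h"
  #include "apr_file_io.h"

  /* Cleanup registered on r->pool: this is where the content file
   * descriptor finally gets closed. */
  static apr_status_t close_content_file(void *data)
  {
      return apr_file_close((apr_file_t *)data);
  }

  static int serve_small_file(request_rec *r)
  {
      apr_file_t *fd;
      apr_bucket_brigade *bb;
      apr_status_t rv;

      rv = apr_file_open(&fd, r->filename, APR_READ, APR_OS_DEFAULT,
                         r->pool);
      if (rv != APR_SUCCESS)
          return HTTP_NOT_FOUND;

      apr_pool_cleanup_register(r->pool, fd, close_content_file,
                                apr_pool_cleanup_null);

      bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
      apr_brigade_insert_file(bb, fd, 0, r->finfo.size, r->pool);

      /* The EOR bucket marks "the handler is done".  The request pool,
       * and with it the cleanup that closes fd, is only torn down when
       * this bucket is destroyed, which the MPM may defer while the
       * connection is kept alive. */
      APR_BRIGADE_INSERT_TAIL(bb,
          ap_bucket_eor_create(r->connection->bucket_alloc, r));

      rv = ap_pass_brigade(r->output_filters, bb);
      return (rv == APR_SUCCESS) ? OK : HTTP_INTERNAL_SERVER_ERROR;
  }

If the EOR bucket is not destroyed until the connection itself goes 
away, every request served over a kept-alive connection pins one 
descriptor, which matches what Rainer is seeing.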

And yes, this sounds like a nasty bug.
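
With the defaults quoted above that is potentially 
MaxKeepAliveRequests * concurrency = 100 * 20 = 2000 descriptors held 
open at once, well past a common default of 1024 for ulimit -n, so a 
plain keep-alive ab run (something like "ab -k -c 20 -n 200000 
http://localhost/small.html", an illustrative invocation rather than 
Rainer's exact command line) should hit EMFILE quickly.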

-Paul

