httpd-dev mailing list archives

From Ruediger Pluem <rpl...@apache.org>
Subject Re: Problem with file descriptor handling in httpd 2.3.1
Date Sun, 04 Jan 2009 14:04:31 GMT


On 01/04/2009 12:49 AM, Rainer Jung wrote:
> On 04.01.2009 00:36, Paul Querna wrote:
>> Rainer Jung wrote:
>>> While testing 2.3.1 I noticed a lot of EMFILE errors: "Too many
>>> open files". I used strace, and the problem looks like this:
>>>
>>> - The test case uses ab with HTTP keep-alive, a concurrency of 20
>>> and a small file, doing about 2000 requests per second.
>>> MaxKeepAliveRequests is 100 (the default).
>>>
>>> - The file leading to EMFILE is the static content file, which can
>>> be observed to be open more than 1000 times in parallel, although
>>> the ab concurrency is only 20.
>>>
>>> - From looking at the code, it seems the file is closed by a
>>> cleanup function associated with the request pool, which is
>>> triggered by an EOR bucket.
>>>
>>> Under keep-alive the content files are thus kept open longer than
>>> the handling of the request, more precisely until the connection is
>>> closed. So when MaxKeepAliveRequests * concurrency exceeds the
>>> process's file descriptor limit, we run out of descriptors: with
>>> the default MaxKeepAliveRequests of 100 and a concurrency of 20,
>>> up to 2000 files can be open at once, well above a typical limit
>>> of 1024.
>>>
>>> I observed the behaviour with 2.3.1 on Linux (SLES10, 64bit) with
>>> the Event, Worker and Prefork MPMs. I haven't yet had time to
>>> retest with 2.2.
>>
>> It should only happen in 2.3.x/trunk, because the EOR bucket is a
>> new feature that lets MPMs do asynchronous writes once the handler
>> has finished running.
>>
>> And yes, this sounds like a nasty bug.
> 
> I verified that I can't reproduce it with 2.2.11 on the same platform.
> 
> I'm not sure I understand the EOR asynchronicity well enough to
> analyze the root cause.
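
For background, the EOR (End Of Request) bucket roughly works as in the
sketch below. This is my own minimal sketch of the trunk behaviour, not
the actual code; ap_bucket_eor_create, APR_BRIGADE_INSERT_TAIL and
ap_pass_brigade are the real trunk/APR APIs, the function around them
is made up for illustration:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* Sketch: what the core effectively does once the handler has
 * finished running. */
static apr_status_t end_request_sketch(request_rec *r,
                                       apr_bucket_brigade *bb)
{
    conn_rec *c = r->connection;

    /* The EOR bucket carries the request_rec. When the core output
     * filter finally destroys this bucket, the request pool is
     * destroyed with it, and the pool cleanup closes the content
     * file. */
    apr_bucket *eor = ap_bucket_eor_create(c->bucket_alloc, r);
    APR_BRIGADE_INSERT_TAIL(bb, eor);

    /* Under keep-alive the core output filter may set the brigade
     * aside instead of writing it out, so the EOR bucket (and the
     * open file descriptor behind it) can survive until the
     * connection is closed. */
    return ap_pass_brigade(c->output_filters, bb);
}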

Can you try the following patch please?

Index: server/core_filters.c
===================================================================
--- server/core_filters.c       (revision 731238)
+++ server/core_filters.c       (working copy)
@@ -367,6 +367,7 @@

 #define THRESHOLD_MIN_WRITE 4096
 #define THRESHOLD_MAX_BUFFER 65536
+#define MAX_REQUESTS_QUEUED 10

 /* Optional function coming from mod_logio, used for logging of output
  * traffic
@@ -381,6 +382,7 @@
     apr_bucket_brigade *bb;
     apr_bucket *bucket, *next;
     apr_size_t bytes_in_brigade, non_file_bytes_in_brigade;
+    int requests;

     /* Fail quickly if the connection has already been aborted. */
     if (c->aborted) {
@@ -466,6 +468,7 @@

     bytes_in_brigade = 0;
     non_file_bytes_in_brigade = 0;
+    requests = 0;
     for (bucket = APR_BRIGADE_FIRST(bb); bucket != APR_BRIGADE_SENTINEL(bb);
          bucket = next) {
         next = APR_BUCKET_NEXT(bucket);
@@ -501,11 +504,22 @@
                 non_file_bytes_in_brigade += bucket->length;
             }
         }
+        else if (bucket->type == &ap_bucket_type_eor) {
+            /*
+             * Count the number of requests still queued in the brigade.
+             * Pipelining of a high number of small files can cause
+             * a high number of open file descriptors, which, if it happens
+             * on many threads in parallel, can cause us to hit the OS limit.
+             */
+            requests++;
+        }
     }

-    if (non_file_bytes_in_brigade >= THRESHOLD_MAX_BUFFER) {
+    if ((non_file_bytes_in_brigade >= THRESHOLD_MAX_BUFFER)
+        || (requests > MAX_REQUESTS_QUEUED)) {
         /* ### Writing the entire brigade may be excessive; we really just
-         * ### need to send enough data to be under THRESHOLD_MAX_BUFFER.
+         * ### need to send enough data to be under THRESHOLD_MAX_BUFFER or
+         * ### under MAX_REQUESTS_QUEUED
          */
         apr_status_t rv = send_brigade_blocking(net->client_socket, bb,
                                                 &(ctx->bytes_written), c);
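
The idea: once more than MAX_REQUESTS_QUEUED EOR buckets are pending in
the brigade, the brigade is written out. Writing it destroys the EOR
buckets, which destroys the associated request pools and runs their
cleanups, so the content files are closed well before the connection is.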


This is still something of a hack, but it may help to confirm whether
this is the problem.
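
If it does help, rerunning the keep-alive test (something like
"ab -k -c 20 -n 100000 http://localhost/test.html", with the URL
adjusted to your small file) while watching "lsof -p <httpd pid>"
should show the number of open copies of the content file staying near
the ab concurrency instead of climbing towards the descriptor limit.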

Regards

Rüdiger

