httpd-dev mailing list archives

From: Ruediger Pluem <rpl...@apache.org>
Subject: Re: mod_proxy race condition bug #37770
Date: Sat, 17 May 2008 19:24:02 GMT


On 05/16/2008 11:21 PM, Adam Woodworth wrote:
> So I have some more information about this, but this time related to
> having keepalives OFF in mod_proxy.
> 
> I tried using the "SetEnv proxy-nokeepalive 1" option in httpd.conf,
> and it cleared up the proxy errors that I was having with an IIS
> backend server, and it may have decreased the proxy errors with Apache
> backend servers as well.
> 
> I tried again without the proxy-nokeepalive option, and the proxy
> errors increased.  Put it back, they decreased dramatically again.
> 
> Turns out that mod_proxy always sends "Connection: Keep-Alive", but
> with "proxy-nokeepalive 1", mod_proxy will always send "Connection:
> Close".
> 
> The backends in my case are all responding with "Connection: Close",
> so the backends aren't even allowing Keep-Alives, regardless of
> whether mod_proxy sends "Connection: Close" or "Connection:
> Keep-Alive".

So either you have set force-proxy-request-1.0, and are thus sending HTTP/1.0
requests to your backend, or your backend has keepalives disabled.
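
For reference, both switches are per-request environment variables; a minimal
reverse proxy fragment (hostname and path are placeholders) would look like
this:

    ProxyPass /app http://backend.example.com/app
    # Either force HTTP/1.0 requests to the backend ...
    SetEnv force-proxy-request-1.0 1
    # ... or stay on HTTP/1.1 but disable keepalives:
    SetEnv proxy-nokeepalive 1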

> 
> My theory is that mod_proxy is incorrectly using a socket pool even
> when the server responds with "Connection: Close" or when mod_proxy
> itself sends "Connection: Close".  Most of the time it probably
> realizes that the sockets in the pool are actually closed, but
> sometimes not and it tries to use a dead socket.

This theory is wrong. See line 1498 (1477 in 2.2.x) in mod_proxy_http.c:

             backend->close += ap_proxy_liststr(apr_table_get(r->headers_out,
                                                              "Connection"),
                                               "close");

Thus the socket to the backend gets closed if the backend sends
Connection: Close.
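
For reference, ap_proxy_liststr does a case-insensitive token match against
the comma-separated header value. A minimal standalone sketch of that kind of
matching (my own code for illustration, not httpd's implementation):

    #include <string.h>
    #include <strings.h>   /* strncasecmp */
    #include <ctype.h>

    /* Return 1 if val occurs as a token in a comma-separated list like
     * "Keep-Alive, close", 0 otherwise. */
    static int liststr_sketch(const char *list, const char *val)
    {
        size_t vlen = strlen(val);

        while (list && *list) {
            /* skip separators and whitespace between tokens */
            while (*list == ',' || isspace((unsigned char)*list))
                list++;
            /* find the end of the current token */
            const char *end = list;
            while (*end && *end != ',' && !isspace((unsigned char)*end))
                end++;
            if ((size_t)(end - list) == vlen
                && strncasecmp(list, val, vlen) == 0)
                return 1;
            list = end;
        }
        return 0;
    }

So a header like "Connection: Keep-Alive, close" still flags the backend
connection for closing.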

> 
> What should be happening is that mod_proxy needs to NOT use a socket
> pool if "Connection: Close" is in use.

It still manages the connections in a pool in this case, but it does not
reuse the sockets / TCP connections for the next backend connection.

In conjunction with the last patch attached to 37770, the following patch
should finally prevent this problem, at the cost of a performance penalty:

Index: modules/proxy/proxy_util.c
===================================================================
--- modules/proxy/proxy_util.c  (Revision 657321)
+++ modules/proxy/proxy_util.c  (Arbeitskopie)
@@ -2168,6 +2168,11 @@
      else {
          conn->addr = worker->cp->addr;
      }
+    /* Close a possible existing socket if we are told to do so */
+    if (conn->close) {
+        socket_cleanup(conn);
+        conn->close = 0;
+    }

      if (err != APR_SUCCESS) {
          return ap_proxyerror(r, HTTP_BAD_GATEWAY,
Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c      (Revision 657321)
+++ modules/proxy/mod_proxy_http.c      (Arbeitskopie)
@@ -1876,6 +1876,20 @@
          ap_proxy_ssl_connection_cleanup(backend, r);
      }

+    /*
+     * If we are handling a reverse proxy connection and this request is
+     * not coming over an already kept-alive connection with the client,
+     * do NOT reuse the connection to the backend, because we cannot
+     * forward a failure to the client in this case as the client does
+     * NOT expect one in this situation.
+     * Yes, this creates a performance penalty.
+     * XXX: Make this behaviour configurable, either via an environment
+     * variable or via a worker configuration directive.
+     */
+    if ((r->proxyreq == PROXYREQ_REVERSE) && (!c->keepalives)) {
+        backend->close = 1;
+    }
+
      /* Step One: Determine Who To Connect To */
      if ((status = ap_proxy_determine_connection(p, r, conf, worker, backend,
                                                  uri, &url, proxyname,

Therefore it should be configurable. Any preferences on whether to do this via
an environment variable or via a worker-specific property?
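
For illustration, the environment variable flavour could be as small as the
following; the variable name "proxy-initial-not-pooled" is only a suggestion
of mine here, nothing settled:

    /*
     * Hypothetical sketch: only mark the backend connection for closing
     * if the admin has opted in via a per-request environment variable.
     */
    if ((r->proxyreq == PROXYREQ_REVERSE) && !c->keepalives
        && apr_table_get(r->subprocess_env, "proxy-initial-not-pooled")) {
        backend->close = 1;
    }

A worker property would avoid the per-request table lookup, but an environment
variable would keep it consistent with proxy-nokeepalive and
force-proxy-request-1.0.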
Remark: Don't get confused if you still see the "error reading status line"
messages in the logs. This doesn't mean that the client gets an error message.
You can cross-check this by setting the error level to debug and looking for
the following message in the log:

"proxy: Closing connection to client because reading from backend server %s failed. Number
of keepalives"
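
The error level itself is raised with the stock LogLevel directive:

    LogLevel debug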


Regards

Rüdiger

