httpd-dev mailing list archives

From Yann Ylavic <ylavic....@gmail.com>
Subject Re: mod_proxy, pooled backend connections and the keep-alive race condition
Date Tue, 01 Oct 2013 13:39:27 GMT
Hi devs,

I was about to propose the following patch to allow mod_proxy_http to flush
all data received from the client to the backend immediately (as it already
does for response data), based on the env "proxy-flushall".

It should address this issue, and at the same time allow protocols like MS
RPC-over-HTTP to work with:
   SetEnvIf Request_Method ^RPC_IN_DATA proxy-flushall

The patch preserves the CL when the client provides one, unless some input
filters have been inserted (DEFLATE...).
When the env is set, the patch never spools the data; it switches to chunked
encoding when a CL was provided but might have changed (input_filters !=
proto_input_filters).
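
For illustration, here is a minimal sketch of the decision just described;
it is not the patch's literal code, and assumes old_cl_val holds the
client's CL as elsewhere in mod_proxy_http.c:

[code]
/* Sketch only: pick the request-body method when "proxy-flushall" is
 * set.  If input filters beyond the protocol ones were inserted (e.g.
 * DEFLATE), the body length may no longer match the client's CL, so
 * fall back to chunked encoding instead of trusting it.
 */
if (apr_table_get(r->subprocess_env, "proxy-flushall")) {
    if (old_cl_val && r->input_filters == r->proto_input_filters) {
        rb_method = RB_STREAM_CL;       /* CL still trustworthy */
    }
    else {
        rb_method = RB_STREAM_CHUNKED;  /* length may have changed */
    }
}
[/code]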

Maybe you can give it a chance here...

[patch]
Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c    (revision 1528080)
+++ modules/proxy/mod_proxy_http.c    (working copy)
@@ -244,10 +244,15 @@ static int stream_reqbody_chunked(apr_pool_t *p,
     apr_bucket_alloc_t *bucket_alloc = r->connection->bucket_alloc;
     apr_bucket_brigade *bb;
     apr_bucket *e;
+    int flushall = 0;

     add_te_chunked(p, bucket_alloc, header_brigade);
     terminate_headers(bucket_alloc, header_brigade);

+    if (apr_table_get(r->subprocess_env, "proxy-flushall")) {
+        flushall = 1;
+    }
+
     while (!APR_BUCKET_IS_EOS(APR_BRIGADE_FIRST(input_brigade)))
     {
         char chunk_hdr[20];  /* must be here due to transient bucket. */
@@ -305,7 +310,8 @@ static int stream_reqbody_chunked(apr_pool_t *p,
         }

         /* The request is flushed below this loop with chunk EOS header */
-        rv = ap_proxy_pass_brigade(bucket_alloc, r, p_conn, origin, bb, 0);
+        rv = ap_proxy_pass_brigade(bucket_alloc, r, p_conn, origin, bb,
+                                   flushall);
         if (rv != OK) {
             return rv;
         }
@@ -371,6 +377,7 @@ static int stream_reqbody_cl(apr_pool_t *p,
     apr_off_t cl_val = 0;
     apr_off_t bytes;
     apr_off_t bytes_streamed = 0;
+    int flushall = 0;

     if (old_cl_val) {
         char *endstr;
@@ -387,6 +394,10 @@ static int stream_reqbody_cl(apr_pool_t *p,
     }
     terminate_headers(bucket_alloc, header_brigade);

+    if (apr_table_get(r->subprocess_env, "proxy-flushall")) {
+        flushall = 1;
+    }
+
     while (!APR_BUCKET_IS_EOS(APR_BRIGADE_FIRST(input_brigade)))
     {
         apr_brigade_length(input_brigade, 1, &bytes);
@@ -450,7 +461,8 @@ static int stream_reqbody_cl(apr_pool_t *p,
         }

         /* Once we hit EOS, we are ready to flush. */
-        rv = ap_proxy_pass_brigade(bucket_alloc, r, p_conn, origin, bb, seen_eos);
+        rv = ap_proxy_pass_brigade(bucket_alloc, r, p_conn, origin, bb,
+                                   (seen_eos || flushall));
         if (rv != OK) {
             return rv ;
         }
@@ -700,6 +712,7 @@ int ap_proxy_http_request(apr_pool_t *p, request_r
     apr_off_t bytes;
     int force10, rv;
     conn_rec *origin = p_conn->connection;
+    int flushall = 0;

     if (apr_table_get(r->subprocess_env, "force-proxy-request-1.0")) {
         if (r->expecting_100) {
@@ -710,6 +723,10 @@ int ap_proxy_http_request(apr_pool_t *p, request_r
         force10 = 0;
     }

+    if (apr_table_get(r->subprocess_env, "proxy-flushall")) {
+        flushall = 1;
+    }
+
     header_brigade = apr_brigade_create(p, origin->bucket_alloc);
     rv = ap_proxy_create_hdrbrgd(p, header_brigade, r, p_conn,
                                  worker, conf, uri, url, server_portstr,
@@ -822,7 +839,8 @@ int ap_proxy_http_request(apr_pool_t *p, request_r
      * (an arbitrary value.)
      */
     } while ((bytes_read < MAX_MEM_SPOOL - 80)
-              && !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade)));
+              && !APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(input_brigade))
+              && !flushall);

     /* Use chunked request body encoding or send a content-length body?
      *
@@ -876,7 +894,8 @@ int ap_proxy_http_request(apr_pool_t *p, request_r
         if (force10
              || (apr_table_get(r->subprocess_env, "proxy-sendcl")
                   && !apr_table_get(r->subprocess_env, "proxy-sendchunks")
-                  && !apr_table_get(r->subprocess_env, "proxy-sendchunked"))) {
+                  && !apr_table_get(r->subprocess_env, "proxy-sendchunked")
+                  && !flushall)) {
             rb_method = RB_SPOOL_CL;
         }
         else {
@@ -889,7 +908,8 @@ int ap_proxy_http_request(apr_pool_t *p, request_r
         }
         else if (!force10
                   && (apr_table_get(r->subprocess_env, "proxy-sendchunks")
-                      || apr_table_get(r->subprocess_env, "proxy-sendchunked"))
+                      || apr_table_get(r->subprocess_env, "proxy-sendchunked")
+                      || flushall)
                   && !apr_table_get(r->subprocess_env, "proxy-sendcl")) {
             rb_method = RB_STREAM_CHUNKED;
         }
[/patch]
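
For context, the last argument of ap_proxy_pass_brigade() is what makes the
data reach the backend immediately.  Roughly, simplified from proxy_util.c
(the real function also checks the status and logs errors):

[code]
/* Simplified sketch of ap_proxy_pass_brigade()'s flush handling;
 * details may differ from the real code.  When flush is nonzero, a
 * FLUSH bucket is appended so the core output filter writes the data
 * out now instead of buffering it.
 */
apr_status_t status;
apr_bucket *e;

if (flush) {
    e = apr_bucket_flush_create(bucket_alloc);
    APR_BRIGADE_INSERT_TAIL(bb, e);
}
status = ap_pass_brigade(origin->output_filters, bb);
apr_brigade_cleanup(bb);
[/code]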


Regards,
Yann.




On Tue, Oct 1, 2013 at 2:36 PM, Plüm, Rüdiger, Vodafone Group <ruediger.pluem@vodafone.com> wrote:

>
>
> > -----Original Message-----
> > From: Joe Orton
> > Sent: Tuesday, October 1, 2013 14:23
> > To: Plüm, Rüdiger, Vodafone Group
> > Cc: dev@httpd.apache.org
> > Subject: Re: mod_proxy, pooled backend connections and the keep-alive
> > race condition
> >
> > On Fri, Aug 02, 2013 at 12:33:58PM +0000, Plüm, Rüdiger, Vodafone Group
> > wrote:
> > > The typical way to solve this today is to know the keepalive timeout
> > > of the backend and set ttl for this worker to a value a few seconds
> > > below.
> >
> > I just looked at a case where the user is hitting this problem and does
> > already have the ttl & keepalive-timeout configured like that; and a few
> > seconds difference is not necessarily enough.  The problem is that the
> > "prefetch" of the 16K request body is making it a lot worse - the worst
> > case is (16K/packet size) * Timeout seconds to complete the "prefetch".
>
> True.
>
> >
> > That's time when the proxy *thinks* the connection is valid but the
> > backend thinks the connection is idle.  And in most reverse-proxy cases
> > that prefetch is adding basically no value AFAICT - the backend is a
> > known vintage and probably HTTP/1.1.  So... could we make the prefetch
> > stuff configurable away?
>
> IMHO no issue with this. Let's hear what others say.
> I guess the main point of prefetch was to make better decisions whether
> to use chunked encoding when sending to the backend, or to provide a CL
> to the backend when the real client does not.
>
> Regards
>
> Rüdiger
>
>
