From rpl...@apache.org
Subject svn commit: r657440 - in /httpd/httpd/branches/2.2.x: CHANGES STATUS include/ap_mmn.h modules/proxy/mod_proxy.h modules/proxy/mod_proxy_ftp.c modules/proxy/mod_proxy_http.c modules/proxy/proxy_util.c server/scoreboard.c
Date Sat, 17 May 2008 19:42:03 GMT
Author: rpluem
Date: Sat May 17 12:42:03 2008
New Revision: 657440

URL: http://svn.apache.org/viewvc?rev=657440&view=rev
Log:
* Merge r602542, r603237, r603502, r603543, r604447, r604449, r605314,
  r605838 from trunk:

* Enable the proxy to keep connections persistent in the HTTPS case.

  Basically, persistence is achieved by keeping the conn_rec structure
  created for our backend connection (whether http or https) in the
  connection pool. This required adjusting scoreboard.c so that its
  functions deal properly with a NULL scoreboard handle, by ignoring the
  call or returning an error code.
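
  The guard is mechanical; condensed from the scoreboard.c hunk at the
  end of this diff:

      AP_DECLARE(int) ap_update_child_status(ap_sb_handle_t *sbh, int status,
                                             request_rec *r)
      {
          if (!sbh)
              return -1;   /* no scoreboard handle: report an error code */

          return ap_update_child_status_from_indexes(sbh->child_num,
                                                     sbh->thread_num,
                                                     status, r);
      }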

* Use a separate subpool to manage the data for the socket and the
  connection member of the proxy_conn_rec struct, as we destroy this data
  more frequently than other data in the proxy_conn_rec struct, such as
  hostname and addr (at least in the case where keepalive connections
  timed out and were closed by the backend).
  This fixes a memory leak with short-lived and broken connections.
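
  The resulting helper in proxy_util.c (see the hunk below) throws away
  exactly the short-lived part:

      static void socket_cleanup(proxy_conn_rec *conn)
      {
          conn->sock = NULL;
          conn->connection = NULL;
          /* clears only the socket/connection data; hostname and addr,
           * which live in conn->pool, survive for the next reuse */
          apr_pool_clear(conn->scpool);
      }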

* Fix another memory leak related to PR 44026. Now that we keep the
  connection data structure alive in the reslist, the lifetime of c->pool
  is too long. r->pool has the correct lifetime, since rp dies before r.
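
  In ap_proxy_make_fake_req() this amounts to allocating from r->pool
  instead of c->pool (see the proxy_util.c hunk below):

      request_rec *rp = apr_pcalloc(r->pool, sizeof(*r));   /* was c->pool */

      rp->pool       = r->pool;
      rp->headers_in = apr_table_make(r->pool, 50);
      /* ... subprocess_env, headers_out, err_headers_out, notes and
       * request_config likewise move from c->pool to r->pool ... */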

* Do not register connection_cleanup as a cleanup for conn->pool. In the
  past it was necessary to register connection_cleanup as a cleanup on the
  frontend connection's memory pool (c->pool) to ensure that the
  connection returns to the connection pool when c->pool gets destroyed or
  cleared. Now we explicitly ensure that the connection returns to the
  connection pool once we have finished handling the request.
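
  In essence (see the proxy_util.c hunks below):

      /* before: the connection found its way back via a pool cleanup */
      apr_pool_cleanup_register(c->pool, (void *)conn,
                                connection_cleanup,
                                apr_pool_cleanup_null);

      /* now: ap_proxy_release_connection() returns it explicitly */
      connection_cleanup(conn);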

* Tag the pools appropriately to ease memory debugging.
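
  For example, from the hunks below:

      apr_pool_create(&pool, p);
      apr_pool_tag(pool, "proxy_worker_cp");     /* visible in pool debug dumps */

      apr_pool_create(&scpool, ctx);
      apr_pool_tag(scpool, "proxy_conn_scpool");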

* Only send a flush bucket down the chain if buckets were sent down the
  chain before that could still be buffered in the network filter. This is
  the case if we have sent an EOS bucket or if we actually sent buckets
  with data down the chain. In all other cases we either have not sent any
  buckets at all down the chain, or we only sent meta buckets that are not
  EOS buckets down the chain. The only meta bucket that remains in this
  case is the flush bucket, which would have removed all possibly buffered
  buckets in the network filter.
  If we sent a flush bucket in the case where no buckets at all were sent
  down the chain, we would break error handling which happens AFTER us.
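
  In connection_cleanup() (proxy_util.c hunk below) this boils down to
  the following, declarations omitted:

      r = conn->r;
      if (conn->need_flush && r && (r->bytes_sent || r->eos_sent)) {
          c  = r->connection;
          bb = apr_brigade_create(r->pool, c->bucket_alloc);
          if (r->eos_sent) {
              /* EOS already sent: flush the connection filters directly */
              ap_fflush(c->output_filters, bb);
          }
          else {
              ap_fflush(r->output_filters, bb);
          }
          apr_brigade_destroy(bb);
          conn->r = NULL;
          conn->need_flush = 0;
      }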

* Using the reslist pool for the proxy_conn_rec structure introduces a
  memory leak when connections get created and destroyed frequently by the
  reslist (e.g. destroying idle elements of the reslist). So use the
  subpool dedicated to the proxy_conn_rec structure to allocate the memory
  for the structure itself.
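
  The reslist constructor (proxy_util.c hunk below) therefore allocates
  the struct from its own subpool:

      apr_pool_create(&ctx, pool);                     /* per-connection subpool */
      apr_pool_tag(ctx, "proxy_conn_pool");
      conn = apr_pcalloc(ctx, sizeof(proxy_conn_rec)); /* was: allocated from pool */

      conn->pool   = ctx;
      conn->scpool = scpool;
      conn->worker = worker;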

PR: 44026, 44543
Submitted by: rpluem
Reviewed by: jim, rpluem, fielding

Modified:
    httpd/httpd/branches/2.2.x/CHANGES
    httpd/httpd/branches/2.2.x/STATUS
    httpd/httpd/branches/2.2.x/include/ap_mmn.h
    httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy.h
    httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_ftp.c
    httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_http.c
    httpd/httpd/branches/2.2.x/modules/proxy/proxy_util.c
    httpd/httpd/branches/2.2.x/server/scoreboard.c

Modified: httpd/httpd/branches/2.2.x/CHANGES
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/CHANGES?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/CHANGES [utf-8] (original)
+++ httpd/httpd/branches/2.2.x/CHANGES [utf-8] Sat May 17 12:42:03 2008
@@ -73,6 +73,12 @@
      didn't pick up on updated sdbm maps due to this.
      PR41190 [Niklas Edmundsson]
 
+  *) mod_proxy: Lower memory consumption for short lived connections.
+     PR 44026. [Ruediger Pluem]
+
+  *) mod_proxy: Keep connections to the backend persistent in the HTTPS case.
+     [Ruediger Pluem]
+
   *) Don't add bogus duplicate Content-Language entries
      PR 11035 [Davi Arnaut]
 

Modified: httpd/httpd/branches/2.2.x/STATUS
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/STATUS?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/STATUS (original)
+++ httpd/httpd/branches/2.2.x/STATUS Sat May 17 12:42:03 2008
@@ -90,21 +90,6 @@
 PATCHES ACCEPTED TO BACKPORT FROM TRUNK:
   [ start all new proposals below, under PATCHES PROPOSED. ]
 
- * mod_proxy: Allow for keepalive backend proxies (PR43238), which also
-   addresses PR44026 and PR44543. These are pretty much interwrapped here.
-    Trunk version of patch:
-      http://svn.apache.org/viewvc?view=rev&revision=602542
-      http://svn.apache.org/viewvc?view=rev&revision=603237
-      http://svn.apache.org/viewvc?view=rev&revision=603502
-      http://svn.apache.org/viewvc?view=rev&revision=603543
-      http://svn.apache.org/viewvc?view=rev&revision=604447
-      http://svn.apache.org/viewvc?view=rev&revision=604449
-      http://svn.apache.org/viewvc?view=rev&revision=605314
-      http://svn.apache.org/viewvc?view=rev&revision=605838
-    Backport version for 2.2.x of patch:
-      http://people.apache.org/~rpluem/patches/proxy-ssl-44026-patch.txt
-   +1: jim, rpluem, fielding
-
  * mod_proxy:  In the case that we fail to read the response line
    from the backend and if we are a reverse proxy request, shutdown
    the connection WITHOUT ANY response to trigger a retry by the client

Modified: httpd/httpd/branches/2.2.x/include/ap_mmn.h
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/include/ap_mmn.h?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/include/ap_mmn.h (original)
+++ httpd/httpd/branches/2.2.x/include/ap_mmn.h Sat May 17 12:42:03 2008
@@ -123,6 +123,9 @@
  * 20051115.12(2.2.8)  Add optional function ap_logio_add_bytes_in() to mog_logio
  * 20051115.13(2.2.9)  Add disablereuse and disablereuse_set
  *                     to proxy_worker struct (minor)
+ * 20051115.14(2.2.9)  Add ap_proxy_ssl_connection_cleanup and
+ *                     add *scpool, *r and need_flush to proxy_conn_rec
+ *                     structure
  *
  */
 
@@ -131,7 +134,7 @@
 #ifndef MODULE_MAGIC_NUMBER_MAJOR
 #define MODULE_MAGIC_NUMBER_MAJOR 20051115
 #endif
-#define MODULE_MAGIC_NUMBER_MINOR 13                    /* 0...n */
+#define MODULE_MAGIC_NUMBER_MINOR 14                    /* 0...n */
 
 /**
  * Determine if the server's current MODULE_MAGIC_NUMBER is at least a

Modified: httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy.h
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy.h?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy.h (original)
+++ httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy.h Sat May 17 12:42:03 2008
@@ -220,7 +220,7 @@
     const char   *hostname;
     apr_port_t   port;
     int          is_ssl;
-    apr_pool_t   *pool;     /* Subpool used for creating socket */
+    apr_pool_t   *pool;     /* Subpool for hostname and addr data */
     apr_socket_t *sock;     /* Connection socket */
     apr_sockaddr_t *addr;   /* Preparsed remote address info */
     apr_uint32_t flags;     /* Conection flags */
@@ -231,6 +231,11 @@
 #if APR_HAS_THREADS
     int          inreslist; /* connection in apr_reslist? */
 #endif
+    apr_pool_t   *scpool;   /* Subpool used for socket and connection data */
+    request_rec  *r;        /* Request record of the frontend request
+                             * which the backend currently answers. */
+    int          need_flush;/* Flag to decide whether we need to flush the
+                             * filter chain or not */
 } proxy_conn_rec;
 
 typedef struct {
@@ -475,6 +480,8 @@
 PROXY_DECLARE(void) ap_proxy_table_unmerge(apr_pool_t *p, apr_table_t *t, char *key);
 /* DEPRECATED (will be replaced with ap_proxy_connect_backend */
 PROXY_DECLARE(int) ap_proxy_connect_to_backend(apr_socket_t **, const char *, apr_sockaddr_t *, const char *, proxy_server_conf *, server_rec *, apr_pool_t *);
+PROXY_DECLARE(apr_status_t) ap_proxy_ssl_connection_cleanup(proxy_conn_rec *conn,
+                                                            request_rec *r);
 PROXY_DECLARE(int) ap_proxy_ssl_enable(conn_rec *c);
 PROXY_DECLARE(int) ap_proxy_ssl_disable(conn_rec *c);
 PROXY_DECLARE(int) ap_proxy_conn_is_https(conn_rec *c);

Modified: httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_ftp.c
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_ftp.c?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_ftp.c (original)
+++ httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_ftp.c Sat May 17 12:42:03 2008
@@ -961,6 +961,7 @@
         }
         /* TODO: see if ftp could use determine_connection */
         backend->addr = connect_addr;
+        backend->r = r;
         ap_set_module_config(c->conn_config, &proxy_ftp_module, backend);
     }
 

Modified: httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_http.c
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_http.c?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_http.c (original)
+++ httpd/httpd/branches/2.2.x/modules/proxy/mod_proxy_http.c Sat May 17 12:42:03 2008
@@ -1850,14 +1850,9 @@
 
 
     backend->is_ssl = is_ssl;
-    /*
-     * TODO: Currently we cannot handle persistent SSL backend connections,
-     * because we recreate backend->connection for each request and thus
-     * try to initialize an already existing SSL connection. This does
-     * not work.
-     */
-    if (is_ssl)
-        backend->close_on_recycle = 1;
+    if (is_ssl) {
+        ap_proxy_ssl_connection_cleanup(backend, r);
+    }
 
     /* Step One: Determine Who To Connect To */
     if ((status = ap_proxy_determine_connection(p, r, conf, worker, backend,

Modified: httpd/httpd/branches/2.2.x/modules/proxy/proxy_util.c
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/modules/proxy/proxy_util.c?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/modules/proxy/proxy_util.c (original)
+++ httpd/httpd/branches/2.2.x/modules/proxy/proxy_util.c Sat May 17 12:42:03 2008
@@ -332,16 +332,16 @@
 
 PROXY_DECLARE(request_rec *)ap_proxy_make_fake_req(conn_rec *c, request_rec *r)
 {
-    request_rec *rp = apr_pcalloc(c->pool, sizeof(*r));
+    request_rec *rp = apr_pcalloc(r->pool, sizeof(*r));
 
-    rp->pool            = c->pool;
+    rp->pool            = r->pool;
     rp->status          = HTTP_OK;
 
-    rp->headers_in      = apr_table_make(c->pool, 50);
-    rp->subprocess_env  = apr_table_make(c->pool, 50);
-    rp->headers_out     = apr_table_make(c->pool, 12);
-    rp->err_headers_out = apr_table_make(c->pool, 5);
-    rp->notes           = apr_table_make(c->pool, 5);
+    rp->headers_in      = apr_table_make(r->pool, 50);
+    rp->subprocess_env  = apr_table_make(r->pool, 50);
+    rp->headers_out     = apr_table_make(r->pool, 12);
+    rp->err_headers_out = apr_table_make(r->pool, 5);
+    rp->notes           = apr_table_make(r->pool, 5);
 
     rp->server = r->server;
     rp->proxyreq = r->proxyreq;
@@ -352,7 +352,7 @@
     rp->proto_output_filters  = c->output_filters;
     rp->proto_input_filters   = c->input_filters;
 
-    rp->request_config  = ap_create_request_config(c->pool);
+    rp->request_config  = ap_create_request_config(r->pool);
     proxy_run_create_req(r, rp);
 
     return rp;
@@ -1369,6 +1369,7 @@
      * it can be disabled.
      */
     apr_pool_create(&pool, p);
+    apr_pool_tag(pool, "proxy_worker_cp");
     /*
      * Alloc from the same pool as worker.
      * proxy_conn_pool is permanently attached to the worker.
@@ -1596,6 +1597,9 @@
 {
     proxy_conn_rec *conn = (proxy_conn_rec *)theconn;
     proxy_worker *worker = conn->worker;
+    apr_bucket_brigade *bb;
+    conn_rec *c;
+    request_rec *r;
 
     /*
      * If the connection pool is NULL the worker
@@ -1616,14 +1620,67 @@
     }
 #endif
 
+    r = conn->r;
+    if (conn->need_flush && r && (r->bytes_sent || r->eos_sent)) {
+        /*
+         * We need to ensure that buckets that may have been buffered in the
+         * network filters get flushed to the network. This is needed since
+         * these buckets have been created with the bucket allocator of the
+         * backend connection. This allocator either gets destroyed if
+         * conn->close is set or the worker address is not reusable which
+         * causes the connection to the backend to be closed or it will be used
+         * again by another frontend connection that wants to recycle the
+         * backend connection.
+         * In this case we could run into nasty race conditions (e.g. if the
+         * next user of the backend connection destroys the allocator before we
+         * sent the buckets to the network).
+         *
+         * Remark 1: Only do this if buckets where sent down the chain before
+         * that could still be buffered in the network filter. This is the case
+         * if we have sent an EOS bucket or if we actually sent buckets with
+         * data down the chain. In all other cases we either have not sent any
+         * buckets at all down the chain or we only sent meta buckets that are
+         * not EOS buckets down the chain. The only meta bucket that remains in
+         * this case is the flush bucket which would have removed all possibly
+         * buffered buckets in the network filter.
+         * If we sent a flush bucket in the case where not ANY buckets were
+         * sent down the chain, we break error handling which happens AFTER us.
+         *
+         * Remark 2: Doing a setaside does not help here as the buckets remain
+         * created by the wrong allocator in this case.
+         *
+         * Remark 3: Yes, this creates a possible performance penalty in the case
+         * of pipelined requests as we may send only a small amount of data over
+         * the wire.
+         */
+        c = r->connection;
+        bb = apr_brigade_create(r->pool, c->bucket_alloc);
+        if (r->eos_sent) {
+            /*
+             * If we have already sent an EOS bucket send directly to the
+             * connection based filters. We just want to flush the buckets
+             * if something hasn't been sent to the network yet.
+             */
+            ap_fflush(c->output_filters, bb);
+        }
+        else {
+            ap_fflush(r->output_filters, bb);
+        }
+        apr_brigade_destroy(bb);
+        conn->r = NULL;
+        conn->need_flush = 0;
+    }
+
     /* determine if the connection need to be closed */
     if (conn->close_on_recycle || conn->close || worker->disablereuse ||
         !worker->is_address_reusable) {
         apr_pool_t *p = conn->pool;
-        apr_pool_clear(conn->pool);
-        memset(conn, 0, sizeof(proxy_conn_rec));
+        apr_pool_clear(p);
+        conn = apr_pcalloc(p, sizeof(proxy_conn_rec));
         conn->pool = p;
         conn->worker = worker;
+        apr_pool_create(&(conn->scpool), p);
+        apr_pool_tag(conn->scpool, "proxy_conn_scpool");
     }
 #if APR_HAS_THREADS
     if (worker->hmax && worker->cp->res) {
@@ -1640,11 +1697,54 @@
     return APR_SUCCESS;
 }
 
+static void socket_cleanup(proxy_conn_rec *conn)
+{
+    conn->sock = NULL;
+    conn->connection = NULL;
+    apr_pool_clear(conn->scpool);
+}
+
+PROXY_DECLARE(apr_status_t) ap_proxy_ssl_connection_cleanup(proxy_conn_rec *conn,
+                                                            request_rec *r)
+{
+    apr_bucket_brigade *bb;
+    apr_status_t rv;
+
+    /*
+     * If we have an existing SSL connection it might be possible that the
+     * server sent some SSL message we have not read so far (e.g. a SSL
+     * shutdown message if the server closed the keepalive connection while
+     * the connection was held unused in our pool).
+     * So ensure that if present (=> APR_NONBLOCK_READ) it is read and
+     * processed. We don't expect any data to be in the returned brigade.
+     */
+    if (conn->sock && conn->connection) {
+        bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
+        rv = ap_get_brigade(conn->connection->input_filters, bb,
+                            AP_MODE_READBYTES, APR_NONBLOCK_READ,
+                            HUGE_STRING_LEN);
+        if ((rv != APR_SUCCESS) && !APR_STATUS_IS_EAGAIN(rv)) {
+            socket_cleanup(conn);
+        }
+        if (!APR_BRIGADE_EMPTY(bb)) {
+            apr_off_t len;
+
+            rv = apr_brigade_length(bb, 0, &len);
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r,
+                          "proxy: SSL cleanup brigade contained %"
+                          APR_OFF_T_FMT " bytes of data.", len);
+        }
+        apr_brigade_destroy(bb);
+    }
+    return APR_SUCCESS;
+}
+
 /* reslist constructor */
 static apr_status_t connection_constructor(void **resource, void *params,
                                            apr_pool_t *pool)
 {
     apr_pool_t *ctx;
+    apr_pool_t *scpool;
     proxy_conn_rec *conn;
     proxy_worker *worker = (proxy_worker *)params;
 
@@ -1654,9 +1754,20 @@
      * when disconnecting from backend.
      */
     apr_pool_create(&ctx, pool);
-    conn = apr_pcalloc(pool, sizeof(proxy_conn_rec));
+    apr_pool_tag(ctx, "proxy_conn_pool");
+    /*
+     * Create another subpool that manages the data for the
+     * socket and the connection member of the proxy_conn_rec struct as we
+     * destroy this data more frequently than other data in the proxy_conn_rec
+     * struct like hostname and addr (at least in the case where we have
+     * keepalive connections that timed out).
+     */
+    apr_pool_create(&scpool, ctx);
+    apr_pool_tag(scpool, "proxy_conn_scpool");
+    conn = apr_pcalloc(ctx, sizeof(proxy_conn_rec));
 
     conn->pool   = ctx;
+    conn->scpool = scpool;
     conn->worker = worker;
 #if APR_HAS_THREADS
     conn->inreslist = 1;
@@ -1925,11 +2036,6 @@
     ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, s,
                  "proxy: %s: has released connection for (%s)",
                  proxy_function, conn->worker->hostname);
-    /* If there is a connection kill it's cleanup */
-    if (conn->connection) {
-        apr_pool_cleanup_kill(conn->connection->pool, conn, connection_cleanup);
-        conn->connection = NULL;
-    }
     connection_cleanup(conn);
 
     return OK;
@@ -1951,6 +2057,8 @@
     apr_status_t err = APR_SUCCESS;
     apr_status_t uerr = APR_SUCCESS;
 
+    conn->r = r;
+
     /*
      * Break up the URL to determine the host to connect to
      */
@@ -2003,14 +2111,7 @@
             conn->hostname = apr_pstrdup(conn->pool, uri->hostname);
             conn->port = uri->port;
         }
-        if (conn->sock) {
-            apr_socket_close(conn->sock);
-            conn->sock = NULL;
-        }
-        if (conn->connection) {
-            apr_pool_cleanup_kill(conn->connection->pool, conn, connection_cleanup);
-            conn->connection = NULL;
-        }
+        socket_cleanup(conn);
         err = apr_sockaddr_info_get(&(conn->addr),
                                     conn->hostname, APR_UNSPEC,
                                     conn->port, 0,
@@ -2154,14 +2255,8 @@
         (proxy_server_conf *) ap_get_module_config(sconf, &proxy_module);
 
     if (conn->sock) {
-        /*
-         * This increases the connection pool size
-         * but the number of dropped connections is
-         * relatively small compared to connection lifetime
-         */
         if (!(connected = is_socket_connected(conn->sock))) {
-            apr_socket_close(conn->sock);
-            conn->sock = NULL;
+            socket_cleanup(conn);
             ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, s,
                          "proxy: %s: backend socket is disconnected.",
                          proxy_function);
@@ -2170,7 +2265,7 @@
     while (backend_addr && !connected) {
         if ((rv = apr_socket_create(&newsock, backend_addr->family,
                                 SOCK_STREAM, APR_PROTO_TCP,
-                                conn->pool)) != APR_SUCCESS) {
+                                conn->scpool)) != APR_SUCCESS) {
             loglevel = backend_addr->next ? APLOG_DEBUG : APLOG_ERR;
             ap_log_error(APLOG_MARK, loglevel, rv, s,
                          "proxy: %s: error creating fam %d socket for target %s",
@@ -2185,6 +2280,7 @@
             backend_addr = backend_addr->next;
             continue;
         }
+        conn->connection = NULL;
 
 #if !defined(TPF) && !defined(BEOS)
         if (worker->recv_buffer_size > 0 &&
@@ -2274,13 +2370,25 @@
     apr_sockaddr_t *backend_addr = conn->addr;
     int rc;
     apr_interval_time_t current_timeout;
+    apr_bucket_alloc_t *bucket_alloc;
+
+    if (conn->connection) {
+        return OK;
+    }
 
     /*
+     * We need to flush the buckets before we return the connection to the
+     * connection pool. See comment in connection_cleanup for why this is
+     * needed.
+     */
+    conn->need_flush = 1;
+    bucket_alloc = apr_bucket_alloc_create(conn->scpool);
+    /*
      * The socket is now open, create a new backend server connection
      */
-    conn->connection = ap_run_create_connection(c->pool, s, conn->sock,
-                                                c->id, c->sbh,
-                                                c->bucket_alloc);
+    conn->connection = ap_run_create_connection(conn->scpool, s, conn->sock,
+                                                0, NULL,
+                                                bucket_alloc);
 
     if (!conn->connection) {
         /*
@@ -2292,17 +2400,9 @@
                      "new connection to %pI (%s)", proxy_function,
                      backend_addr, conn->hostname);
         /* XXX: Will be closed when proxy_conn is closed */
-        apr_socket_close(conn->sock);
-        conn->sock = NULL;
+        socket_cleanup(conn);
         return HTTP_INTERNAL_SERVER_ERROR;
     }
-    /*
-     * register the connection cleanup to client connection
-     * so that the connection can be closed or reused
-     */
-    apr_pool_cleanup_register(c->pool, (void *)conn,
-                              connection_cleanup,
-                              apr_pool_cleanup_null);
 
     /* For ssl connection to backend */
     if (conn->is_ssl) {

Modified: httpd/httpd/branches/2.2.x/server/scoreboard.c
URL: http://svn.apache.org/viewvc/httpd/httpd/branches/2.2.x/server/scoreboard.c?rev=657440&r1=657439&r2=657440&view=diff
==============================================================================
--- httpd/httpd/branches/2.2.x/server/scoreboard.c (original)
+++ httpd/httpd/branches/2.2.x/server/scoreboard.c Sat May 17 12:42:03 2008
@@ -344,6 +344,9 @@
 {
     worker_score *ws;
 
+    if (!sb)
+        return;
+
     ws = &ap_scoreboard_image->servers[sb->child_num][sb->thread_num];
 
 #ifdef HAVE_TIMES
@@ -471,6 +474,9 @@
 AP_DECLARE(int) ap_update_child_status(ap_sb_handle_t *sbh, int status,
                                       request_rec *r)
 {
+    if (!sbh)
+        return -1;
+
     return ap_update_child_status_from_indexes(sbh->child_num, sbh->thread_num,
                                                status, r);
 }
@@ -479,6 +485,9 @@
 {
     worker_score *ws;
 
+    if (!sbh)
+        return;
+
     if (sbh->child_num < 0) {
         return;
     }


