Message-ID: <48FF7881.8050703@apache.org>
Date: Wed, 22 Oct 2008 21:01:21 +0200
From: Ruediger Pluem
To: dev@httpd.apache.org
Subject: Re: Memory leak in mod_proxy_http and in mpm_event components.
References: <20081022134922.229ade11@grimnir> <48FF414C.9000600@movik.net>

On 10/22/2008 05:33 PM, Jim Jagielski wrote:
>
> On Oct 22, 2008, at 11:05 AM, Manik Taneja wrote:
>
>> With regards to the patch, it seems that the changes in event.c have
>> no effect. We have seen the leak with both the worker and the event
>> MPM anyway. The changes in scoreboard.c just ensure that the memory
>> pointed to by new_sbh is not lost every time ap_create_sb_handle is
>> called.
>
> But this implies that any code which calls ap_create_sb_handle() needs
> to ensure that *new_sbh isn't NULL, and, IIRC, most callers just define
> the handle (e.g. ap_sb_handle_t *sbh;), so the value is garbage. Thus,
> this also seems to imply a change to the API (since current code will
> break with the ap_create_sb_handle() change).
>
>> I have tested this fix and the memory consumption has dropped
>> drastically; however, I still observe a slow leak at times. I haven't
>> had the opportunity to spend a whole lot of time testing the fix. It's
>> on my plate, though, and I'll send out my findings to the group
>> sometime next week.
>
> Agreed that the mod_proxy_http.c patch looks reasonable at first
> blush, but we need to look into all possible interactions with the
> change to a shorter-lived pool.

There is a comment above the code explaining why we should use
connection->pool, but IMHO it is outdated: it goes back to the times when
there was a direct coupling between the backend connection and the frontend
connection. With the use of the connection pools this is no longer the case,
and I cannot think of any reason not to use r->pool instead.
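To make the lifetime difference concrete outside of httpd, here is a minimal
standalone sketch (my own illustration with made-up sizes and loop counts,
not httpd code): everything allocated from the long-lived connection pool
stays around until the connection is closed, while allocations from a
per-request subpool are released at the end of each request. Over a busy
keepalive connection that difference is exactly the growth described above.

#include <apr_general.h>
#include <apr_pools.h>

int main(void)
{
    apr_pool_t *conn_pool;   /* stands in for c->pool, lives for the whole connection */
    apr_pool_t *req_pool;    /* stands in for r->pool, one per request */
    int i;

    apr_initialize();
    apr_pool_create(&conn_pool, NULL);

    for (i = 0; i < 1000; i++) {                    /* simulated keepalive requests */
        apr_pool_create(&req_pool, conn_pool);      /* r->pool is a child of c->pool */

        /* Allocated from the connection pool: accumulates until the
         * connection is closed, i.e. 1000 * 8k on this "connection". */
        (void)apr_palloc(conn_pool, 8192);

        /* Allocated from the request subpool: given back when the request
         * pool is destroyed below, i.e. at the end of each request. */
        (void)apr_palloc(req_pool, 8192);

        apr_pool_destroy(req_pool);                 /* end of request */
    }

    apr_pool_destroy(conn_pool);                    /* end of connection */
    apr_terminate();
    return 0;
}

The uri and the scheme string are per-request data anyway, so r->pool looks
like the natural place for them.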
I used a slightly modified version of the patch:

Index: modules/proxy/mod_proxy_http.c
===================================================================
--- modules/proxy/mod_proxy_http.c      (revision 707022)
+++ modules/proxy/mod_proxy_http.c      (working copy)
@@ -1869,23 +1869,9 @@
     const char *u;
     proxy_conn_rec *backend = NULL;
     int is_ssl = 0;
-
-    /* Note: Memory pool allocation.
-     * A downstream keepalive connection is always connected to the existence
-     * (or not) of an upstream keepalive connection. If this is not done then
-     * load balancing against multiple backend servers breaks (one backend
-     * server ends up taking 100% of the load), and the risk is run of
-     * downstream keepalive connections being kept open unnecessarily. This
-     * keeps webservers busy and ties up resources.
-     *
-     * As a result, we allocate all sockets out of the upstream connection
-     * pool, and when we want to reuse a socket, we check first whether the
-     * connection ID of the current upstream connection is the same as that
-     * of the connection when the socket was opened.
-     */
-    apr_pool_t *p = r->connection->pool;
+    apr_pool_t *p = r->pool;
     conn_rec *c = r->connection;
-    apr_uri_t *uri = apr_palloc(r->connection->pool, sizeof(*uri));
+    apr_uri_t *uri = apr_palloc(p, sizeof(*uri));

     /* find the scheme */
     u = strchr(url, ':');
@@ -1893,7 +1879,7 @@
         return DECLINED;
     if ((u - url) > 14)
         return HTTP_BAD_REQUEST;
-    scheme = apr_pstrndup(c->pool, url, u - url);
+    scheme = apr_pstrndup(p, url, u - url);
     /* scheme is lowercase */
     ap_str_tolower(scheme);
     /* is it for us? */

and it passes all Perl framework tests.

Regards

Rüdiger
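P.S. Regarding the ap_create_sb_handle() point quoted above: the hazard Jim
describes is easy to see in isolation. The following is only a standalone
sketch with made-up names (sb_handle / create_handle), not the actual
scoreboard API; it shows what happens when a "reuse the handle if the caller
already has one" check meets a caller that never initialized its pointer:

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int child_num;
    int thread_num;
} sb_handle;

/* Hypothetical variant that only allocates when the caller passes NULL,
 * analogous to the proposed scoreboard.c change. */
static void create_handle(sb_handle **out, int child_num, int thread_num)
{
    if (*out == NULL) {
        *out = malloc(sizeof(**out));
    }
    (*out)->child_num = child_num;
    (*out)->thread_num = thread_num;
}

int main(void)
{
    sb_handle *sbh;            /* declared but never initialized, like most
                                * current ap_create_sb_handle() callers */
    create_handle(&sbh, 0, 0); /* *out holds garbage here, so the allocation
                                * may be skipped and garbage memory written
                                * through: undefined behaviour */
    printf("%d/%d\n", sbh->child_num, sbh->thread_num);
    free(sbh);
    return 0;
}

So every caller would first have to be changed to initialize the handle
(ap_sb_handle_t *sbh = NULL;), which is the API change Jim is talking about.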