httpd-dev mailing list archives

From Manik Taneja <mtan...@movik.net>
Subject Re: Memory leak in mod_proxy_http and in mpm_event components.
Date Thu, 30 Oct 2008 16:48:48 GMT
Hi,

I tested the patch that you sent out earlier. It seems that memory
consumption still continues to grow, albeit at a much slower rate. The
test was a standard Polygraph setup, configured for a peak rate of
2000 req/sec. I ran the test for about 70 minutes, and a total of about 8
million requests were sent to the Apache server over the lifetime of the
test.

My keepalive settings on apache are as follows.

KeepAlive On
KeepAliveTimeout 900
# deliberately very large: we want connections to be closed only because they went idle
MaxKeepAliveRequests 2000000 

I have attached two graphs. The first shows the memory consumption
over time of the Apache server running with the fix that Ruediger
provided earlier. The second shows the same server with the additional
fix that my colleague Harish sent out a while ago (patch attached here
for convenience).

So it seems that there is still a slow leak; perhaps we are hitting the
problem described in the following post.

http://www.nabble.com/apr-pools---memory-leaks-td19766166.html
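A likely contributor to an apparent slow leak with APR pools is that the APR allocator caches freed blocks on a free list for reuse rather than returning them to the OS, so the process's memory footprint climbs to its high-water mark and stays there even though nothing is leaked in the strict sense (the `apr_allocator_max_free_set()` knob bounds this cache). Below is a minimal toy model of that behaviour in plain C; it is a sketch for illustration only, not APR's actual implementation, and `toy_allocator`/`toy_alloc`/`toy_free` are hypothetical names:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy model of an allocator that caches freed blocks on a free list
 * instead of returning them to the OS.  "held" tracks memory the
 * process still owns even though the application has freed it. */
struct toy_block {
    size_t size;
    struct toy_block *next;
};

struct toy_allocator {
    struct toy_block *free_list; /* cached, reusable blocks */
    size_t held;                 /* bytes sitting on the free list */
    size_t max_free;             /* cache cap; SIZE_MAX = unlimited */
};

static void *toy_alloc(struct toy_allocator *a, size_t size)
{
    struct toy_block **bp, *b;

    /* Prefer a cached block that is large enough. */
    for (bp = &a->free_list; *bp != NULL; bp = &(*bp)->next) {
        if ((*bp)->size >= size) {
            b = *bp;
            *bp = b->next;
            a->held -= b->size;
            return b + 1; /* user memory starts after the header */
        }
    }
    /* Nothing cached fits: take fresh memory from the OS. */
    b = malloc(sizeof(*b) + size);
    b->size = size;
    return b + 1;
}

static void toy_free(struct toy_allocator *a, void *mem)
{
    struct toy_block *b = (struct toy_block *)mem - 1;

    if (a->held + b->size <= a->max_free) {
        /* Cache the block for reuse; the OS never gets it back. */
        b->next = a->free_list;
        a->free_list = b;
        a->held += b->size;
    }
    else {
        free(b);
    }
}
```

With an unlimited cache, freeing a large block leaves `held` at the block's size, which from the outside (RSS) is indistinguishable from a leak; with a small cap the memory goes back to the OS immediately.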

Cheers,
Manik


==== //depot/httproxy/httpd-2.2.9/server/scoreboard.c#1 - /root/work/proxy/depot/httproxy/httpd-2.2.9/server/scoreboard.c ====
--- /tmp/tmp.5484.38    2008-10-30 22:12:34.000000000 +0530
+++ /root/work/proxy/depot/httproxy/httpd-2.2.9/server/scoreboard.c    2008-10-30 17:23:07.000000000 +0530
@@ -378,7 +378,10 @@ int find_child_by_pid(apr_proc_t *pid)
 AP_DECLARE(void) ap_create_sb_handle(ap_sb_handle_t **new_sbh, apr_pool_t *p,
                                      int child_num, int thread_num)
 {
-    *new_sbh = (ap_sb_handle_t *)apr_palloc(p, sizeof(ap_sb_handle_t));
+    /* fix for bugId: 9 */
+    if (*new_sbh == NULL)
+        *new_sbh = (ap_sb_handle_t *)apr_palloc(p, sizeof(ap_sb_handle_t));
+    /* end of fix */
     (*new_sbh)->child_num = child_num;
     (*new_sbh)->thread_num = thread_num;
 }
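For context on why the scoreboard change matters under the keepalive settings above: as discussed in the thread below, each call to ap_create_sb_handle() allocates a fresh handle out of the pool it is passed, so if that pool outlives many calls (e.g. a connection serving a long run of keepalive requests), the earlier handles become unreachable but are not reclaimed until the pool is destroyed. A toy plain-C sketch of the two behaviours, where `toy_pool` and `toy_sb_handle` are hypothetical stand-ins for `apr_pool_t` and `ap_sb_handle_t`, not httpd types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy stand-in for an APR pool: allocations are only reclaimed when
 * the whole pool is destroyed, never individually. */
typedef struct {
    size_t used; /* bytes handed out and still owned by the pool */
} toy_pool;

/* Hypothetical stand-in for ap_sb_handle_t. */
typedef struct {
    int child_num;
    int thread_num;
} toy_sb_handle;

static void *toy_palloc(toy_pool *p, size_t size)
{
    p->used += size;     /* pool usage only ever grows */
    return malloc(size); /* backing store is a stand-in for the pool's memory */
}

/* Unpatched behaviour: a fresh handle on every call.  The previous
 * handle becomes unreachable but stays in the pool until the pool
 * is destroyed. */
static void create_handle_leaky(toy_sb_handle **h, toy_pool *p,
                                int child, int thread)
{
    *h = toy_palloc(p, sizeof(**h));
    (*h)->child_num = child;
    (*h)->thread_num = thread;
}

/* Patched behaviour, mirroring the "if (*new_sbh == NULL)" check in
 * the scoreboard.c diff above: reuse the handle if the caller already
 * has one.  Note Jim's caveat in the thread: this only works if every
 * caller initializes the handle pointer to NULL first. */
static void create_handle_reusing(toy_sb_handle **h, toy_pool *p,
                                  int child, int thread)
{
    if (*h == NULL)
        *h = toy_palloc(p, sizeof(**h));
    (*h)->child_num = child;
    (*h)->thread_num = thread;
}
```

With MaxKeepAliveRequests at 2000000, the leaky variant could tie up on the order of 2000000 × sizeof(ap_sb_handle_t) bytes per long-lived connection before the pool is finally cleared.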



Manik Taneja wrote:
> Thanks Ruediger, I see that you have caught another instance of the leak.
> I'll give your patch a run and let you know the results in a day or two.
>
> Regards,
> Manik
>
> On 23-Oct-08, at 12:31 AM, Ruediger Pluem wrote:
>
>>
>>
>> On 10/22/2008 05:33 PM, Jim Jagielski wrote:
>>>
>>> On Oct 22, 2008, at 11:05 AM, Manik Taneja wrote:
>>>
>>>>
>>>> With regards to the patch, it seems that the changes in event.c have
>>>> no effect. In any case, we have seen the leak with both the worker and
>>>> event MPMs. The changes in scoreboard.c just ensure that the memory
>>>> pointed to by new_sbh is not lost every time ap_create_sb_handle is
>>>> called.
>>>>
>>>
>>> But this implies that any code which calls ap_create_sb_handle() needs
>>> to ensure that *new_sbh isn't NULL, and, IIRC, most callers just declare
>>> them (e.g. ap_sb_handle_t *sbh;), so the value is garbage. Thus, this
>>> also seems to imply a change to the API (since current code will break
>>> with the ap_create_sb_handle() change).
>>>
>>>> I have tested this fix and the memory consumption has dropped
>>>> drastically; however, I still observe a slow leak at times. I
>>>> haven't had the opportunity to spend a whole lot of time testing the
>>>> fix. It's on my plate, however, and I'll send out my findings to the
>>>> group sometime next week.
>>>>
>>>
>>> Agreed that the mod_proxy_http.c patch looks reasonable at first
>>> blush, but we need to look into all possible interactions with the
>>> change to a shorter-lived pool.
>>
>> There is an explanation above the code of why we should take
>> connection->pool, but IMHO it is outdated and goes back to the time when
>> there was a direct link between the backend connection and the frontend
>> connection. With the use of the connection pools this is no longer the
>> case, and I cannot think of any reason not to use r->pool instead.
>>
>> I used a slightly modified version of the patch:
>>
>> Index: modules/proxy/mod_proxy_http.c
>> ===================================================================
>> --- modules/proxy/mod_proxy_http.c      (Revision 707022)
>> +++ modules/proxy/mod_proxy_http.c      (working copy)
>> @@ -1869,23 +1869,9 @@
>>     const char *u;
>>     proxy_conn_rec *backend = NULL;
>>     int is_ssl = 0;
>> -
>> -    /* Note: Memory pool allocation.
>> -     * A downstream keepalive connection is always connected to the existence
>> -     * (or not) of an upstream keepalive connection. If this is not done then
>> -     * load balancing against multiple backend servers breaks (one backend
>> -     * server ends up taking 100% of the load), and the risk is run of
>> -     * downstream keepalive connections being kept open unnecessarily. This
>> -     * keeps webservers busy and ties up resources.
>> -     *
>> -     * As a result, we allocate all sockets out of the upstream connection
>> -     * pool, and when we want to reuse a socket, we check first whether the
>> -     * connection ID of the current upstream connection is the same as that
>> -     * of the connection when the socket was opened.
>> -     */
>> -    apr_pool_t *p = r->connection->pool;
>> +    apr_pool_t *p = r->pool;
>>     conn_rec *c = r->connection;
>> -    apr_uri_t *uri = apr_palloc(r->connection->pool, sizeof(*uri));
>> +    apr_uri_t *uri = apr_palloc(p, sizeof(*uri));
>>
>>     /* find the scheme */
>>     u = strchr(url, ':');
>> @@ -1893,7 +1879,7 @@
>>        return DECLINED;
>>     if ((u - url) > 14)
>>         return HTTP_BAD_REQUEST;
>> -    scheme = apr_pstrndup(c->pool, url, u - url);
>> +    scheme = apr_pstrndup(p, url, u - url);
>>     /* scheme is lowercase */
>>     ap_str_tolower(scheme);
>>     /* is it for us? */
>>
>> and it passes all perl framework tests.
>>
>> Regards
>>
>> Rüdiger
>
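To make the pool-lifetime argument in the quoted patch concrete: r->pool is destroyed when the request finishes, while r->connection->pool lives until the (possibly very long) keepalive connection closes, so per-request scratch such as the apr_uri_t accumulates if taken from the connection pool. A toy simulation of one keepalive connection, using hypothetical helpers rather than httpd code (`URI_SIZE` stands in for sizeof(apr_uri_t)):

```c
#include <assert.h>
#include <stddef.h>

/* Toy pool: only tracks how many bytes it has handed out. */
typedef struct {
    size_t used;
} toy_pool;

#define URI_SIZE 64 /* stand-in for sizeof(apr_uri_t) */

/* Simulate nreq keepalive requests on one connection.  Per-request
 * scratch (the parsed URI) is taken either from the connection pool
 * (cleared only when the connection closes) or from the request pool
 * (cleared after every request).  Returns the bytes still held by the
 * connection pool when the connection finally closes. */
static size_t simulate_connection(int nreq, int use_request_pool)
{
    toy_pool conn_pool = { 0 };
    int i;

    for (i = 0; i < nreq; i++) {
        toy_pool req_pool = { 0 };
        toy_pool *scratch = use_request_pool ? &req_pool : &conn_pool;

        scratch->used += URI_SIZE; /* apr_palloc(p, sizeof(*uri)) */
        /* end of request: req_pool is destroyed here, conn_pool is not */
    }
    return conn_pool.used;
}
```

Allocating from the connection pool grows linearly with the number of requests on the connection; allocating from the request pool leaves nothing behind, which matches why the r->pool version stops the per-request growth.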

