httpd-dev mailing list archives

From "William A. Rowe, Jr." <wr...@rowe-clan.net>
Subject RE: SEGV in allocator_free
Date Sat, 20 Mar 2004 06:12:30 GMT
At 07:47 PM 3/19/2004, Mathihalli, Madhusudan wrote:
>>-----Original Message-----
>>From: William A. Rowe, Jr. [mailto:wrowe@rowe-clan.net]
>>
>>At 01:30 PM 3/19/2004, Mathihalli, Madhusudan wrote:
>>>>-----Original Message-----
>>>>From: Sander Striker [mailto:striker@apache.org]
>>>[SNIP]
>>>>
>>>>allocator = 0x0, that's bad.  You didn't do a full httpd rebuild, so
>>>>there is no way of telling what pool this is.  Can you do a full
>>>>rebuild (with pool debugging enabled)?  Is this vanilla httpd-2.0.48?
>>>
>>>Pretty much - with some minor fixes for HP-UX, and some SSL fixes that've gone into the 2.0.49 release.
>>>(fix mem leak and send the 'close-alert' message)
>>
>>so the mem leak fix is there?
>>
>>if the segfault reoccurs - would you validate that the vanilla 2.0.48 suffered the same segv?
>
>Here's the stack trace of the SEGV with 2.0.49:
>
>Frame 14 is apr_pool_clear, and so is Frame 1! Is there some sort of recursion happening?

This is just one little piece of it: frame 1 is clearing a subpool, most likely...

>#1  0xc000000001babdc0:0 in apr_pool_clear (pool=0x60000000006cf1c8)
>    at apr_pools.c:713
>#2  0x40000000000b5690:0 in core_output_filter+0x1cd0 ()

in the grand scheme of (apparently) a connection pool clear:

>#12 0xc000000001ebecc0:0 in ssl_io_filter_cleanup+0xd0 ()
>   from /opt/apache2.0.49/modules/mod_ssl.so
>#13 0xc000000001badb60:0 in run_cleanups (cref=0x6000000000224ec8)
>    at apr_pools.c:1951
>#14 0xc000000001babca0:0 in apr_pool_clear (pool=0x6000000000224ea8)
>    at apr_pools.c:693
>#15 0x400000000005c940:0 in worker_thread+0x5a0 ()
>#16 0xc000000001b9b220:0 in dummy_worker (opaque=0x60000000000be908)
>    at thread.c:88
>#17 0xc0000000000a21a0:0 in __pthread_unbound_body+0x490 ()
>   from /usr/lib/hpux64/libpthread.so.1

This proves it to be a subpool...

>(gdb) fr 1
>#1  0xc000000001babdc0:0 in apr_pool_clear (pool=0x60000000006cf1c8)
>    at apr_pools.c:713
>713     in apr_pools.c
>(gdb) p *pool
>$6 = {parent = 0x6000000000224ea8, child = 0x0, sibling = 0x6000000000459268, 
>  ref = 0x6000000000224eb0, cleanups = 0x0, allocator = 0x60000000001cafb0, 
>  subprocesses = 0x0, abort_fn = 0, user_data = 0x0, tag = 0x0, 
>  active = 0x60000000006cf1a0, self = 0x60000000006cf1a0, 
>  self_first_avail = 0x60000000006cf230 "`"}

of this pool (note that the subpool's parent pointer above, 0x6000000000224ea8, is exactly the pool being cleared in frame 14) ...

>(gdb) fr 14
>#14 0xc000000001babca0:0 in apr_pool_clear (pool=0x6000000000224ea8)
>    at apr_pools.c:693
>693     in apr_pools.c
>(gdb) p *pool
>$8 = {parent = 0x600000000001f9f8, child = 0x0, sibling = 0x6000000000222e88, 
>  ref = 0x6000000000226ed8, cleanups = 0x60000000005d42a0, 
>  allocator = 0x60000000001cafb0, subprocesses = 0x0, abort_fn = 0, 
>  user_data = 0x0, tag = 0x4000000000037220 "transaction", 
>  active = 0x60000000005d01c0, self = 0x6000000000224e80, 
>  self_first_avail = 0x6000000000224f10 "`"}

which is likely a child of the process pool.

Suggestion: as a subpool, it is already cleared by the parent pool's subpool
mop-up, and is then hit by an ugly explicit release in core_output_filter.




