httpd-dev mailing list archives

From Ryan Bloom <...@covalent.net>
Subject Re: segv after client closes connection
Date Thu, 15 Nov 2001 04:46:01 GMT
On Wednesday 14 November 2001 08:22 pm, Ryan Bloom wrote:

Okay, I have finally isolated this bug.  This is a bit of a stickler.

Essentially, what is happening is that we register a cleanup on the
connection pool to call lingering_close.  We have a sub-pool of the
connection pool in the core_output_filter.  The bug happens any time we
have not written all of the data by the time lingering_close is called:
because we clear all sub-pools before running the cleanups, the cleanup
ends up calling into the core_output_filter, which looks for a sub-pool
that no longer exists.

I'm still looking for solutions to the bug.

Ryan

> On Wednesday 14 November 2001 08:08 pm, Brian Pane wrote:
>
> I have an even more repeatable case.  :-)
>
> telnet localhost 80
> CONNECT www.google.com HTTP/1.0
>
>
> This always produces the same segfault.  This is on my list for tonight.
>
> Ryan
>
> > I'm seeing a repeatable crash with the current CVS head.
> > Test case:
> >   * Prefork mpm on Linux
> >   * Run ab -c1 -n {some large number} {url}
> >   * While ab is running, kill it to cause a SIGPIPE
> >     in the httpd.
> >
> > Program received signal SIGSEGV, Segmentation fault.
> > [Switching to Thread 1024 (LWP 30860)]
> > 0x4003cc96 in apr_pool_clear (a=0x80fea44) at apr_pools.c:957
> > 957        free_blocks(a->first->h.next);
> >
> > (gdb) where
> > #0  0x4003cc96 in apr_pool_clear (a=0x80fea44) at apr_pools.c:957
> > #1  0x0808c3c8 in core_output_filter (f=0x80f8d4c, b=0x0) at core.c:3220
> > #2  0x08085654 in ap_pass_brigade (next=0x80f8d4c, bb=0x80f909c)
> >     at util_filter.c:276
> > #3  0x08084083 in ap_flush_conn (c=0x80f8b24) at connection.c:142
> > #4  0x080840d5 in ap_lingering_close (dummy=0x80f8b14) at connection.c:179
> > #5  0x4003cb24 in run_cleanups (c=0x80f908c) at apr_pools.c:833
> > #6  0x4003cc7c in apr_pool_clear (a=0x80f8a14) at apr_pools.c:949
> > #7  0x080799df in child_main (child_num_arg=0) at prefork.c:598
> > #8  0x08079cc5 in make_child (s=0x80b0a2c, slot=0) at prefork.c:770
> > #9  0x08079f6a in perform_idle_server_maintenance (p=0x80af7cc)
> >     at prefork.c:911
> > #10 0x0807a27e in ap_mpm_run (_pconf=0x80af7cc, plog=0x80e396c,
> >     s=0x80b0a2c) at prefork.c:1069
> > #11 0x0807f21c in main (argc=1, argv=0xbffffa1c) at main.c:432
> > #12 0x40114177 in __libc_start_main (main=0x807ecdc <main>, argc=1,
> >     ubp_av=0xbffffa1c, init=0x805c950 <_init>, fini=0x8096440 <_fini>,
> >     rtld_fini=0x4000e184 <_dl_fini>, stack_end=0xbffffa0c)
> >     at ../sysdeps/generic/libc-start.c:129
> >
> > (gdb) print *a
> > $1 = {first = 0x0, last = 0x80fea38, cleanups = 0x0, subprocesses = 0x0,
> >   sub_pools = 0x0, sub_next = 0x813bb94, sub_prev = 0x0, parent =
> > 0x80f8a14,
> >   free_first_avail = 0x80fea74 "Dê\017\bxê\017\bxê\017\b", apr_abort = 0,
> >   prog_data = 0x0}
> >
> > (gdb)  print *a->parent
> > $2 = {first = 0x80f8a08, last = 0x80f8a08, cleanups = 0x80f908c,
> >   subprocesses = 0x0, sub_pools = 0x0, sub_next = 0x0, sub_prev = 0x0,
> >   parent = 0x80e798c, free_first_avail = 0x80f8a44 "\024\212\017\b\t",
> >   apr_abort = 0, prog_data = 0x0}
> > (gdb)

-- 

______________________________________________________________
Ryan Bloom				rbb@apache.org
Covalent Technologies			rbb@covalent.net
--------------------------------------------------------------
