httpd-dev mailing list archives

From Brian Pane <>
Subject how to recreate the file descriptor segfault? Re: cvs commit: httpd-2.0 STATUS
Date Sat, 12 Jan 2002 06:22:11 GMT
I just tried to debug this with the current CVS HEAD on
Solaris, but I can't reproduce the crash.  The test case
I tried was to lower httpd's file-descriptor ulimit to a
small number and run ab with enough concurrent requests
to exceed the fd limit.  Is there a better test case
that will trigger the segfault?
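For reference, the reproduction idea above can be sketched roughly as below. The httpd invocation and URL are hypothetical placeholders, not from the thread; the final subshell just demonstrates the fd-exhaustion condition itself (errno 24, EMFILE, matching the "(24)Too many open files" errors in the logs) without needing a running server:

```shell
# Hypothetical reproduction sketch:
#   ulimit -n 32                               # shrink the per-process fd limit
#   ./httpd -X                                 # start httpd under that limit
#   ab -c 64 -n 10000 http://localhost:8080/   # more concurrency than fds
#
# The exhaustion condition itself, demonstrated without httpd:
( ulimit -n 16
  python3 -c '
fds = []
try:
    while True:
        fds.append(open("/dev/null"))
except OSError as e:
    # EMFILE is errno 24, the same code in the error_log lines below
    print("hit limit:", e.errno)
' )
```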


Jeff Trawick wrote:

> writes:
>>brianp      02/01/11 00:07:07
>>  Modified:    .        STATUS
>>  Log:
>>  Updated STATUS to cover the worker segfault fixes
>>  -    * The worker MPM on Solaris segfaults when it runs out of file
>>  -      descriptors.  (This may affect other MPMs and/or platforms.)
>I can still readily hit this on current code (the same code that no
>longer segfaults with graceful restart).
>[Fri Jan 11 07:26:37 2002] [error] (24)Too many open files:
>apr_accept: (client socket)
>[Fri Jan 11 07:26:37 2002] [error] [client] (24)Too many
>open files: file permissions deny server access: /exp
>[Fri Jan 11 07:26:37 2002] [error] [client] (24)Too many
>open files: cannot access type map file: /export/home
>[Fri Jan 11 07:26:38 2002] [notice] child pid 25493 exit signal
>Segmentation fault (11), possible coredump in /export/ho
>This is the same coredump I saw before:
>#0  0xff33a3cc in apr_wait_for_io_or_timeout (sock=0x738360,
> for_read=1) at sendrecv.c:70
>70              FD_SET(sock->socketdes, &fdset);
>The socket has already been closed so trying to set bit -1 segfaults.
