httpd-users mailing list archives

From gf b <gfb3...@gmail.com>
Subject [users@httpd] Apache not letting child process die
Date Mon, 30 Mar 2009 23:10:47 GMT
Hi everyone.
I have a Perl CGI script, which runs on Linux, and whose execution involves
forking a child process.  For clarity, I'll refer to this child process as
C, and to the original process as P.  After the forking, process C goes off
to perform a lengthy calculation and cache the results.  Process P, on the
other hand, prints a response that includes a job id and exits immediately,
a few milliseconds after the fork.  The client extracts the job id from this
response and then polls the CGI script with it until it receives the results
computed and cached by C.
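
For reference, the client-side polling loop looks essentially like this
(sketched in Python for brevity; the URL layout and query parameter name are
simplified stand-ins, not the real endpoint):

```python
import json
import time
import urllib.request

def poll_for_result(base_url, job_id, interval=2.0, timeout=300.0):
    """Poll the CGI script with the job id until results are ready.

    The "?job_id=" query parameter and the convention that a non-null
    "result" field means "done" are illustrative, not the real protocol.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(f"{base_url}?job_id={job_id}") as resp:
            reply = json.load(resp)
        if reply.get("result") is not None:
            return reply["result"]
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still not finished after {timeout}s")
```
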

Until recently, this script performed flawlessly as described above, but now
the connection stays open until C terminates.  This makes it impossible for
the client to give the user any feedback while the child is performing its
lengthy calculation.

When I examine the processes during the script's execution (using
/usr/bin/top), I see that C is running and has been reparented to init
(pid 1), both of which are to be expected.  But I also see that P is listed
as "defunct", with a state code of Z, for "zombie".

Here's the relevant part of the Perl CGI script:

    my $job_id = new_job_id();
    $SIG{CHLD} = 'IGNORE';  # auto-reap children; no explicit wait() needed
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;

    if ( $pid ) {
      # the parent branch (process P)
      $| = 1;  # unbuffer output
      print "Connection: close\r\n";
      print "Content-type: application/json\r\n\r\n";
      ###exit;
      print qq({"error":null,"id":0,"result":"$job_id"});
      close STDOUT;
      close STDERR;
    }
    else {
      # the child branch (process C)
      close STDOUT;
      close STDERR;
      compute_and_cache_results( $job_id );
    }

    exit;
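
In case it matters, here is my understanding of the underlying pipe
semantics (a small Python demonstration, not tied to Apache or to my actual
script): a reader gets EOF only after every process holding a copy of the
write descriptor has closed it, so a forked child that inherits the
descriptor can keep the reader waiting even after the parent exits.

```python
import os
import time

# The reader sees EOF only when *every* copy of the write fd is closed.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # "C": inherits the write end and holds it for half a second
    os.close(r)
    time.sleep(0.5)
    os.close(w)
    os._exit(0)
# "P" closes its copy of the write end right away...
os.close(w)
start = time.time()
os.read(r, 1)  # ...yet this blocks until C closes its inherited copy
elapsed = time.time() - start
print(f"reader saw EOF after {elapsed:.1f}s")
os.waitpid(pid, 0)
```
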

When I invoke the script from the Unix command line via HTTP (using wget), I
instantly see the output (containing the job id) printed to my terminal, as
expected, but the wget process does not terminate; i.e., the connection
remains open even though the CGI script has exited and the response includes
a "Connection: close" header.

My first guess is that the problem has to do with the lingering zombie P
process, but I'm not entirely sure.  The reason for my uncertainty is that
if I uncomment the line that says "###exit" above, the client-side wget
command now terminates instantly, even though, server-side, the P process
still shows as a zombie long afterwards.  In other words, the connection
stays open only when the server-side script emits output after the HTTP
headers.

What does the script need to do to tell Apache that it is completely and
unequivocally finished, and that the connection should be closed?

Also, what Apache configuration parameters would affect Apache's behavior in
this regard?  The version of Apache I'm using is 2.2.3.

More generally, how can I troubleshoot this problem?

TIA!

gfb322b
