httpd-dev mailing list archives

From: Aaron Bannert <>
Subject: Re: CGI bucket needed
Date: Wed, 25 Sep 2002 04:38:52 GMT
On Tue, Sep 24, 2002 at 07:06:04PM -0700, Greg Stein wrote:
> Just ran into an interesting bug, and I've got a proposal for a way to solve
> it, too. (no code tho :-)
> If a CGI writes to stderr [more than the pipe's buffer has room for], then
> it will block on that write. Meanwhile, when Apache goes to deliver the CGI
> output to the network, it will *block* on a read from the CGI's output.
> See the deadlock yet? :-)
> The CGI can't generate output because it needs the write-to-stderr to
> complete. Apache can't drain stderr until the read-from-stdout completes. In
> fact, Apache won't even drain stderr until the CGI is *done* (it must empty
> the PIPE bucket passed into the output filters).
> Eventually, the deadlock resolves itself when the read from the PIPE bucket
> times out.
> [ this read behavior occurs in the C-L filter ]
> [ NOTE: it appears this behavior is a regression from Apache 1.3. In 1.3, we
>   just hook stderr into the error log. In 2.0, we manually read lines, then
>   log them (with timestamps) ]
> I believe the solution is to create a new CGI bucket type. The read()
> function would read from stdout, similar to a normal PIPE bucket (e.g.
> create a new HEAP bucket with the results). However, the bucket *also* holds
> the stderr pipe from the CGI script. When you do a bucket read(), it
> actually blocks on both pipes. If data comes in from stderr, then it drains
> it and sends that to the error log. Data that comes in from stdout is
> handled normally.

Yuck. I think if we had the ability to multiplex brigades then we'd
have a more elegant approach. There are other similar possible
deadlocks with CGI that would all be solved with a general-purpose
multiplexing mechanism.
