httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject CGIs not killed off when client closes socket
Date Thu, 06 Mar 1997 09:12:31 GMT
I looked into what's going on when CGIs aren't killed off after the client
closes the socket.  It looks like the problem is that we never even check
whether the client has closed the socket.  To do that we'd have to run
select() regularly with the client socket in the read or write set (either
should do).  SIGPIPE isn't delivered until the server actually writes to
the client's socket.
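
Roughly what I mean, as an untested sketch (the function and fd names are
made up, this isn't proposed patch code): select() with a zero timeout to
poll, and a MSG_PEEK recv() to tell "client sent more data" apart from
"client closed the socket":

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>

    /* Sketch: returns 1 if the client has closed its end, 0 otherwise.
     * Readable plus 0 bytes from a MSG_PEEK recv() means EOF; peeking
     * leaves any pipelined request data in the socket buffer. */
    static int client_closed(int client_fd)
    {
        fd_set readfds;
        struct timeval tv;
        char c;

        FD_ZERO(&readfds);
        FD_SET(client_fd, &readfds);
        tv.tv_sec = 0;      /* zero timeout: poll, don't block */
        tv.tv_usec = 0;
        if (select(client_fd + 1, &readfds, NULL, NULL, &tv) <= 0)
            return 0;       /* nothing pending (or select error) */
        return recv(client_fd, &c, 1, MSG_PEEK) == 0;
    }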

So I looked through send_fd_length(), which is where something like this
would have to be done.  You'd have to select() on the client socket and on
the fd you're supposed to be sending from.  Unfortunately it uses FILE *...
and we're stuck with that right now; a patch to change it would be huge.
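
If we did have a raw fd instead of a FILE *, the copy loop would look
something like this sketch (untested; it reuses the client_closed() sketch
above, and all the names are made up):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <unistd.h>

    static int client_closed(int client_fd);    /* sketch above */

    /* Sketch: copy from src_fd (the CGI's output pipe) to client_fd
     * until the source hits EOF or the client goes away.  Returns the
     * number of bytes sent. */
    static long copy_fd(int src_fd, int client_fd)
    {
        fd_set readfds;
        char buf[8192];
        long total = 0;
        int n, maxfd = (src_fd > client_fd ? src_fd : client_fd) + 1;

        for (;;) {
            FD_ZERO(&readfds);
            FD_SET(src_fd, &readfds);
            FD_SET(client_fd, &readfds);
            if (select(maxfd, &readfds, NULL, NULL, NULL) <= 0)
                break;
            if (FD_ISSET(client_fd, &readfds) && client_closed(client_fd))
                break;          /* client gone: caller can kill the CGI */
            if (!FD_ISSET(src_fd, &readfds))
                continue;       /* client activity only, not an EOF */
            n = read(src_fd, buf, sizeof(buf));
            if (n <= 0)
                break;          /* source EOF or error */
            if (write(client_fd, buf, n) != n)
                break;          /* short write: client error */
            total += n;
        }
        return total;
    }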

But for almost every call to send_fd/send_fd_length we could setvbuf the
FILE * to unbuffered input, and then safely select() on its fd.
(It would also be faster in pretty much all cases except ranged responses,
where having a buffer helps when a bunch of close-together ranges are
requested.)  Note that you have to call setvbuf on a FILE * before you do
any i/o on it.
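
i.e. roughly this (again just a sketch, the wrapper name is made up):

    #include <stdio.h>

    /* Sketch: wrap a fd in an unbuffered FILE *.  setvbuf() has to come
     * before the first read; after that each fgetc()/fread() maps onto
     * exactly one read(), so select() on fileno(f) correctly predicts
     * whether the next stdio read would block.  With a buffer in the
     * way, select() can say "readable" and stdio then block trying to
     * fill the whole buffer, or say "not readable" while stdio still
     * has buffered bytes to hand out. */
    static FILE *unbuffered_stream(int fd)
    {
        FILE *f = fdopen(fd, "r");
        if (f != NULL)
            setvbuf(f, NULL, _IONBF, 0);
        return f;
    }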

Someone want to comment on the validity of this analysis?  Do we want a
patch for this in 1.2?  Or do we want a restricted version of the above
that adds a whole new send_fd_cgi, so the other methods are affected as
little as possible...

Dean

