On 03.12.2013 14:18, Jeff Trawick wrote:
On Tue, Dec 3, 2013 at 3:16 AM, Stefan Ruppert <firstname.lastname@example.org> wrote:
On 03.12.2013 00:37, William A. Rowe Jr. wrote:
On Mon, 02 Dec 2013 01:34:58 +0100, Branko Čibej <firstname.lastname@example.org> wrote:
On 02.12.2013 01:29, Eric Covener wrote:
I am looking at an httpd bug that causes a hang on Windows but
succeeds on unix.
It seems that (short) files are opened w/ buffering, read,
closed via apr_file_close, and read again [successfully on unix].
On Unix, the reads are satisfied out of the buffer.
-1. On Windows, the destroyed apr_thread_mutex causes a crash.
Is reading from the closed file on the extreme bogus end of the
spectrum, as I suspect, and are we just getting lucky on the unix side?
I'd certainly call a successful read from a closed file a bug.
Should they blow up in 2.0 during a read if they've been closed?
Dunno ... my gut feeling in cases like this is to just leave it alone.
Developers should have some fun finding heisenbugs, too. :)
If we fix this, it needs to be minimal impact. Zeroing out the
buffer on close seems like the lowest CPU impact.
A minimal-impact and, IMHO, correct fix is to return an error from
buffered I/O if the file was closed.
An application should not call a read or write function on a closed
file. If it does, it's a bug, and the application should be notified of that fact.
I keep telling myself over the years that APR's propensity to crash when
called in situations as wrong as this (quite different from proprietary
libraries with other considerations) is actually very good for code
quality in the long run.
What's the least we can do for this case that avoids having to check
good calls for validity and yet gives unmistakable feedback? Clear the
buffer pointer during close?
You are absolutely right: after a close the file structure isn't valid anymore, so checking the file descriptor is a shot in the dark...
Maybe not clearing the buffer pointer but assigning 0xdeadbeef to force a crash?