Dereferencing a null pointer will cause a segmentation fault.  But is it guaranteed that the memory page 0xdeadbeef points into is not mapped readable & writable, so that one would also get a segmentation fault on reading or writing through it?
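
For what it's worth, one way to check whether that address happens to
be mapped in a given process (a Linux/POSIX-ish illustration, nothing
more -- msync() on an unmapped range fails with ENOMEM):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    uintptr_t addr = 0xdeadbeef;
    long pagesize = sysconf(_SC_PAGESIZE);
    /* round down to the start of the page containing 0xdeadbeef */
    void *page = (void *)(addr & ~((uintptr_t)pagesize - 1));

    if (msync(page, (size_t)pagesize, MS_ASYNC) == -1 && errno == ENOMEM)
        printf("0xdeadbeef is not mapped in this process\n");
    else
        printf("0xdeadbeef is (or may be) mapped here\n");
    return 0;
}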

However, the functions still need to be changed to return errors for unbuffered files, since in that case the buffer won't be referenced (and cause a crash).

I think the read/write functions should also set *nbytes = 0 when filedes < 0, so callers' loops will stop looping.
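
Something along these lines at the top of apr_file_read() (and the
equivalent in apr_file_write()) would cover both points -- an untested
sketch against the unix apr_file_t, field names from memory, not an
actual patch:

APR_DECLARE(apr_status_t) apr_file_read(apr_file_t *thefile, void *buf,
                                        apr_size_t *nbytes)
{
+    /* file already closed: report it and stop callers' read loops */
+    if (thefile->filedes < 0) {
+        *nbytes = 0;
+        return APR_EBADF;
+    }
    ...existing buffered/unbuffered paths unchanged...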

In addition, apr_file_eof should return an APR_EBADF error:

APR_DECLARE(apr_status_t) apr_file_eof(apr_file_t *fptr)
{
+    if (fptr->filedes < 0) {
+        return APR_EBADF;
+    }
    if (fptr->eof_hit == 1) {
        return APR_EOF;
    }
    return APR_SUCCESS;
}
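
For reference, the kind of caller loop that currently spins (or keeps
reading stale buffered data) after a close -- a sketch, not actual
httpd code:

    char buf[4096];
    apr_size_t len = sizeof(buf);
    apr_status_t rv;

    while ((rv = apr_file_read(f, buf, &len)) == APR_SUCCESS && len > 0) {
        /* consume len bytes ... */
        len = sizeof(buf);
    }
    /* with the checks above, a read after apr_file_close() returns
       APR_EBADF with len == 0, so the loop terminates cleanly */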



On Tue, Dec 3, 2013 at 8:41 AM, Stefan Ruppert <sr@myarm.com> wrote:
On 03.12.2013 14:18, Jeff Trawick wrote:
On Tue, Dec 3, 2013 at 3:16 AM, Stefan Ruppert <sr@myarm.com> wrote:

    On 03.12.2013 00:37, William A. Rowe Jr. wrote:

        On Mon, 02 Dec 2013 01:34:58 +0100
        Branko Čibej <brane@apache.org> wrote:

            On 02.12.2013 01:29, Eric Covener wrote:

                I am looking at an httpd bug that causes a hang on
                Windows but succeeds on Unix.

                It seems that (short) files are opened w/ buffering,
                read, apr_file_closed, and read again [successfully
                on Unix].

                On Unix, the reads are satisfied out of the buffer;
                file->filedes is -1.  On Windows, the destroyed
                apr_thread_mutex causes a hang.

                Is reading from the closed file on the extreme bogus
                end of the spectrum, as I suspect, and just getting
                lucky in the Unix case?


            I'd certainly call a successful read from a closed file a bug.

                Should they blow up in 2.0 during a read if they've been
                closed?


            Dunno ... my gut feeling in cases like this is to just
            leave it be.  Developers should have some fun finding
            heisenbugs, too. :)


        If we fix this, it needs to have minimal impact.  Zeroing out the
        buffer on close seems like the lowest CPU impact.


    A minimal-impact and IMHO correct fix is to return an error from
    buffered I/O if the file was closed.

    An application should not call a read or write function on a closed
    file. If it does, it's a bug, and the application should be notified
    of that fact.


I keep telling myself over the years that APR's propensity to crash when
called in situations as wrong as this (quite different from proprietary
libraries with other considerations) is actually very good for code
quality in the long run.

What's the least we can do for this case that avoids having to check
good calls for validity and yet gives unmistakable feedback?  Clear the
buffer pointer during close?

You are absolutely right; after a close the file structure isn't valid anymore, so checking the file descriptor is a shot in the dark...

Maybe not clearing the buffer pointer but assigning 0xdeadbeef to force a crash?

Regards,
Stefan