apr-dev mailing list archives

From Jeff Trawick <traw...@gmail.com>
Subject Re: buffered files read after apr_file_close()'ed
Date Tue, 03 Dec 2013 14:43:29 GMT
On Tue, Dec 3, 2013 at 9:15 AM, Daniel Lescohier
<daniel.lescohier@cbsi.com> wrote:

> A null pointer will cause a segmentation fault.  Is it guaranteed that the
> memory page that the pointer 0xdeadbeef points to is one that is not marked
> readable & writable, so that one would get a segmentation fault on
> reading/writing to it?
>
> However, the functions still need to change to return errors for the cases
> when they are unbuffered files.  The buffer won't be referenced (and cause
> a crash) in those cases.
>
> I think the read/write functions should also set *nbytes = 0 when
> filedes < 0, so callers' loops will stop looping.
>

For unbuffered files they'll generally get rv == EBADF for free, right?
And apr_file_read() for Unix seems to set *nbytes = 0 if an error occurs??
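
[Editorial note: for concreteness, here is a minimal standalone sketch of
the guard being proposed. The mini_* names are hypothetical stand-ins for
APR's private apr_file_t; this is an illustration of the idea, not the
actual APR code.]

    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical miniature of a buffered file handle; the real
     * apr_file_t lives in APR's private headers. */
    typedef struct {
        int   filedes;   /* -1 once the file has been closed */
        char *buffer;    /* read/write buffer for buffered files */
    } mini_file_t;

    /* The guard under discussion: fail fast on a closed file and zero
     * *nbytes so the caller's read loop stops making "progress". */
    static int mini_file_read(mini_file_t *f, void *buf, size_t *nbytes)
    {
        (void)buf;           /* the sketch only shows the early-out */
        if (f->filedes < 0) {
            *nbytes = 0;     /* caller's loop terminates */
            return EBADF;    /* explicit error, matching the unbuffered case */
        }
        /* ... normal buffered/unbuffered read path ... */
        return 0;
    }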

>
> In addition, apr_file_eof should return an error APR_EBADF:
>
> APR_DECLARE(apr_status_t) apr_file_eof(apr_file_t *fptr)
> {
> +    if (fptr->filedes < 0) {
> +        return APR_EBADF;
> +    }
>     if (fptr->eof_hit == 1) {
>         return APR_EOF;
>     }
>     return APR_SUCCESS;
> }
>
>
(shrug)

* Cheap mechanisms in apr_FOO_close to help catch application bugs later
(e.g., invalidating pointers or file descriptors; see the sketch after this
list): awesome
* Adding sanity checking like this for application bugs: never-ending work
* Crashes due to application bugs: fine with me
* Negative feedback due to application bugs as long as we don't have to
check for the condition: awesome
* Hangs or incorrect/invalid feedback due to application bugs when
otherwise APR would require explicit code to check for it: fine with me
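
[Editorial note: to make the first bullet concrete, a standalone sketch of
the invalidate-on-close idea, again with hypothetical mini_* names rather
than APR's real internals.]

    #include <stdlib.h>
    #include <unistd.h>

    typedef struct {
        int   filedes;
        char *buffer;
    } mini_file_t;   /* hypothetical stand-in for apr_file_t */

    /* Two cheap stores on the close path: they cost nothing when the
     * application is correct, but make later misuse fail loudly. */
    static int mini_file_close(mini_file_t *f)
    {
        int rv = close(f->filedes);
        f->filedes = -1;    /* later syscalls get EBADF for free */
        free(f->buffer);
        f->buffer = NULL;   /* later buffered reads crash instead of
                               silently returning stale data */
        return rv;
    }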

But then this particular mod_cache_disk bug doesn't seem so interesting to
me personally :)  (My own Mileage May Vary)

Maybe the extra sanity check is not in an API called over and over, so it
doesn't clutter the main path.  (not so bad)



>
> On Tue, Dec 3, 2013 at 8:41 AM, Stefan Ruppert <sr@myarm.com> wrote:
>
>> Am 03.12.2013 14:18, schrieb Jeff Trawick:
>>
>>> On Tue, Dec 3, 2013 at 3:16 AM, Stefan Ruppert <sr@myarm.com> wrote:
>>>
>>>     Am 03.12.2013 00:37, schrieb William A. Rowe Jr.:
>>>
>>>         On Mon, 02 Dec 2013 01:34:58 +0100
>>>         Branko Čibej <brane@apache.org> wrote:
>>>
>>>             On 02.12.2013 01:29, Eric Covener wrote:
>>>
>>>                 I am looking at a httpd bug that causes a hang on
>>>                 windows but
>>>                 succeeds on unix.
>>>
>>>                 It seems that (short) files are opened w/ buffering,
>>>                 read, apr_file_close()'ed, and read again [successfully
>>>                 on unix]
>>>
>>>                 On Unix, they are satisfied out of the buffer.
>>>                 file->filedes is -1. On Windows, the destroyed
>>>                 apr_thread_mutex causes a hang.
>>>
>>>                 Is reading from the closed file on the extreme bogus end
>>>                 of the
>>>                 spectrum as I suspect and just getting lucky on the unix
>>>                 case?
>>>
>>>
>>>             I'd certainly call a successful read from a closed file a
>>> bug.
>>>
>>>                 Should they blow up in 2.0 during a read if they've been
>>>                 closed?
>>>
>>>
>>>             Dunno ... my gut feeling in cases like this is to just leave
>>>             it be.
>>>             Developers should have some fun finding heisenbugs, too. :)
>>>
>>>
>>>         If we fix this, it needs to be minimal impact.  Zeroing out the
>>>         buffer on close seems like the lowest CPU impact.
>>>
>>>
>>>     A minimal-impact and, IMHO, correct fix is to return an error from
>>>     buffered I/O if the file was closed.
>>>
>>>     An application should not call a read or write function on a closed
>>>     file. If it does, it's a bug, and it should be notified of that fact.
>>>
>>>
>>> I keep telling myself over the years that APR's propensity to crash when
>>> called in situations as wrong as this (quite different from proprietary
>>> libraries with other considerations) is actually very good for code
>>> quality in the long run.
>>>
>>> What's the least we can do for this case that avoids having to check
>>> good calls for validity and yet give unmistakable feedback?  Clear the
>>> buffer pointer during close?
>>>
>>
>> You are absolutely right; after a close the file structure isn't valid
>> anymore, thus checking the file descriptor is a shot in the dark...
>>
>> Maybe not clearing the buffer pointer but assigning 0xdeadbeef to force a
>> crash?
>>
>> Regards,
>> Stefan
>>
>>
>
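
[Editorial note: on the 0xdeadbeef question raised at the top of the
thread: nothing guarantees that the page containing 0xdeadbeef is unmapped,
so a read through it may quietly succeed, whereas dereferencing NULL faults
on mainstream platforms because page zero is left unmapped. A small sketch
of the trade-off, with hypothetical names again.]

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        char *buffer;
    } mini_file_t;   /* hypothetical stand-in for apr_file_t */

    static void mini_file_invalidate(mini_file_t *f)
    {
        free(f->buffer);
        f->buffer = NULL;   /* portable: dereference faults on any
                               mainstream platform */
        /* Alternative suggested above:
         *     f->buffer = (char *)(uintptr_t)0xdeadbeef;
         * not guaranteed to fault on dereference, but the value is easy
         * to recognize in a debugger or core dump if it does. */
    }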


-- 
Born in Roswell... married an alien...
http://emptyhammock.com/
