hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3579) libhdfs: fix exception handling
Date Wed, 01 Aug 2012 02:43:34 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426291#comment-13426291 ]

Colin Patrick McCabe commented on HDFS-3579:
--------------------------------------------

bq. I find these JAVA EQUIVALENT comments to be very helpful; could we keep them around? So long as they're accurate, I mean. If they're misleading, then deleting is correct.

ok.

bq. Please use interface here rather than recapitulating its ternary.

It was done to avoid printing out "org/apache/hadoop/fs/FSDataInputStream" etc., since, as you commented above, it's nicer to print something shorter.  I don't have a strong feeling about it either way, but I suspect it's easier just to redo the ternary here.
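
For illustration, a minimal sketch of the kind of ternary in question; the enum and names below are stand-ins, not the actual libhdfs code:

{code}
#include <stdio.h>

/* Illustrative stand-ins, not the real libhdfs type constants. */
enum stream_type { INPUT, OUTPUT };

static void print_expected(enum stream_type type)
{
    /* Redoing the ternary locally keeps the message short, instead of
     * printing the full JNI class path like
     * "org/apache/hadoop/fs/FSDataInputStream". */
    fprintf(stderr, "error: expected a %s\n",
            (type == INPUT) ? "FSDataInputStream" : "FSDataOutputStream");
}
{code}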

bq. Is this correct? FSDataInputStream#read returns -1 on EOF and 0 on EINTR? That's special.
I see docs for the -1 case, but I don't see anywhere that the 0 could come from?

The standard Java convention is that -1 means EOF, and 0 is just a short read.  hdfsRead,
on the other hand, follows the UNIX convention.  See HADOOP-1582 for more details.
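
To make the two conventions concrete, here's a minimal sketch of the translation involved; the helper name is made up and this is not the actual libhdfs code:

{code}
/* Hedged sketch: map the Java read convention onto the UNIX one.
 * Java's FSDataInputStream#read: -1 means EOF, 0 means zero bytes were
 * read, >0 is a (possibly short) read.
 * UNIX read(2): 0 means EOF, >0 is bytes read, -1 is an error. */
static int java_to_unix_read_result(int java_result)
{
    if (java_result == -1)
        return 0;           /* Java EOF becomes UNIX EOF */
    return java_result;     /* short reads pass through unchanged */
}
{code}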

bq. Let's preserve the interface that an hdfsFile can be NULL without causing a SEGV. Just toss in an if (file == NULL) return -1; near the top.

The whole thing is kind of messy.  Passing a NULL pointer to hdfsClose is a user error, yet
we check for it for some reason.

There's no way to actually *get* an hdfsFile whose file->type is UNINITIALIZED.  Every hdfsOpen path either returns NULL or returns a file of type INPUT or OUTPUT.  There's no way to close a file that hasn't been opened, either.  Similarly, there's no way to get a file where file->file is NULL.

The original code didn't check for file->file being NULL either (look at it carefully and you'll see what I mean).

tl;dr:  I didn't change the behavior here.  But someone should eventually.
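
For reference, a sketch of the guard being discussed; the struct below is a stand-in for the real hdfsFile internals, and the errno choice is an assumption:

{code}
#include <errno.h>
#include <stdlib.h>

/* Stand-in for the real hdfsFile internals; illustrative only. */
struct hdfsFile_internal { int type; void *file; };
typedef struct hdfsFile_internal *hdfsFile;

static int close_with_null_guard(hdfsFile file)
{
    if (file == NULL) {    /* the suggested check near the top */
        errno = EBADF;     /* assumption: report a bad-handle error */
        return -1;         /* fail instead of SEGV on file->type below */
    }
    /* ... the real code would invoke the Java close() via JNI here ... */
    free(file);
    return 0;
}
{code}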
                
> libhdfs: fix exception handling
> -------------------------------
>
>                 Key: HDFS-3579
>                 URL: https://issues.apache.org/jira/browse/HDFS-3579
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 2.0.1-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-3579.004.patch, HDFS-3579.005.patch, HDFS-3579.006.patch
>
>
> libhdfs does not consistently handle exceptions.  Sometimes we don't free the memory
> associated with them (memory leak).  Sometimes we invoke JNI functions that are not supposed
> to be invoked when an exception is active.
> Running a libhdfs test program with -Xcheck:jni shows the latter problem clearly:
> {code}
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> Exception in thread "main" java.io.IOException: ...
> {code}
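
The warnings above are what -Xcheck:jni emits when JNI functions are called while an exception is still pending.  A minimal sketch of the safe pattern in plain JNI (the wrapper function is illustrative, not from the patch):

{code}
#include <jni.h>

/* Hedged sketch of exception-safe JNI calling; 'invoke' is a made-up
 * wrapper, not a libhdfs helper. */
static jobject invoke(JNIEnv *env, jobject obj, jmethodID mid)
{
    jobject ret = (*env)->CallObjectMethod(env, obj, mid);
    if ((*env)->ExceptionCheck(env)) {
        (*env)->ExceptionDescribe(env); /* print it while it's pending */
        (*env)->ExceptionClear(env);    /* clear before any further JNI calls */
        return NULL;                    /* caller sees failure, not a leak */
    }
    return ret;
}
{code}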

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
