hadoop-hdfs-issues mailing list archives

From "Andy Isaacson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3579) libhdfs: fix exception handling
Date Tue, 31 Jul 2012 22:05:35 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13426166#comment-13426166 ]

Andy Isaacson commented on HDFS-3579:
-------------------------------------

{code}
...
+    jvalue jVal;
+
+    jthrowable jthr = classNameOfObject(exc, env, &className);
+    if (jthr) {
{code}
the blank line should be after the declaration of jthr.
{code}
+        fprintf(stderr, "PrintExceptionAndFree: error determining class name "
+            "of exception!");
{code}
add a \n to this fprintf.  If it were me, I would also replace all the surprised "!"s with ".",
but I am not going to insist on unexcitifying the error messages.
{code}
+        fprintf(stderr, " error: (no exception)");
{code}
another missing \n.
{code}
-        constructNewObjectOfClass(env, NULL, "org/apache/hadoop/fs/Path",
+    jthr = constructNewObjectOfClass(env, &jPath, "org/apache/hadoop/fs/Path",
                                   "(Ljava/lang/String;)V", jPathString);
{code}
indent the continuation line to match the (.
{code}
+    jthr = newCStr(env, jRet, val);
+    if (jthr)
         goto done;
 done:
{code}
having a "goto done" on the line before "done:" is a bit confusing.  I'd just drop it.
{code}
--- hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.h
+++ hadoop-hdfs-project/hadoop-hdfs/src/main/native/libhdfs/jni_helper.h
...
+void destroyLocalReference(JNIEnv *env, jobject jObject);
{code}
Since this is a C API with external linkage, I think we need to use a reserved namespace prefix
to avoid breaking outside code.  So perhaps make this hdfsDestroyLocalReference?  That is a
large change, and since this is newly exposed code, it can be done in a separate jira.

An alternate fix, if we only support a .so and don't ship a .a, is to use a linker script
to control symbol visibility.

But one way or another we need to make sure we have sanitary symbol table usage.
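
For example, something along these lines would do it for the .so case (a sketch only; the file
name and the export pattern are placeholders, not something the build does today), passed to the
link with -Wl,--version-script=libhdfs.exports:
{code}
/* Hypothetical libhdfs.exports version script: export only the public
 * hdfs* API and keep internal helpers (destroyLocalReference, invokeMethod,
 * and friends) local to the shared object. */
{
    global:
        hdfs*;
    local:
        *;
};
{code}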

{code}
+    if (jthr) {
+        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+            "hdfsDisconnect: org.apache.hadoop.fs.FileSystem::close");
{code}
I don't think :: is right for Java?  I think it should be "org.apache.hadoop.fs.FileSystem#close".

Please check all the printExceptionAndFree calls and verify that they are consistent; I see
some with and some without the org.class.path.  I don't care which standard we settle on, but I
think "cFuncName: org.class.path#method" is the right string unless there's a better idea.

{code}
     //bufferSize
     if (!bufferSize) {
{code}
Please delete this useless comment.  (Not your code, but since you're editing right here, let's
just fix it.)

{code}
     }  else if ((flags & O_WRONLY) && (flags & O_APPEND)) {
+        // WRITE/APPEND?
+       jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,
{code}
There's a bunch of funky whitespace here; let's fix it.  "}  else" has an extra space, and
I think the "// WRITE" comment is indented one space too far.
{code}
+    file = calloc(1, sizeof(struct hdfsFile_internal));
{code}
I find {{file = calloc(1, sizeof *file);}} easier to convince myself of, but I don't know
if we have a local style guideline on this point?
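For what it's worth, here is the shape I mean (a standalone sketch with a dummy struct standing
in for the real hdfsFile_internal):
{code}
#include <stdio.h>
#include <stdlib.h>

/* Dummy stand-in for the real struct, just so the sketch compiles. */
struct hdfsFile_internal {
    void *file;
    int flags;
};

static struct hdfsFile_internal *allocFile(void)
{
    /* sizeof *file tracks the variable's type automatically; the
     * sizeof(struct hdfsFile_internal) spelling has to be kept in sync
     * by hand if the type is ever renamed. */
    struct hdfsFile_internal *file = calloc(1, sizeof *file);
    return file;    /* caller checks for NULL */
}

int main(void)
{
    struct hdfsFile_internal *f = allocFile();
    printf("allocated %zu bytes\n", sizeof *f);
    free(f);
    return 0;
}
{code}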
{code}
+    if ((flags & O_WRONLY) == 0) {
{code}
This is wrong per HDFS-3710. (Many occurrences of this pattern.)  No need to fix the existing
ones outside of the code you're touching, but please don't add more.
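
To spell out the issue as I understand it (a standalone sketch; isReadOnly is a made-up helper):
the open(2) access mode is a value stored in the low bits, not a set of independent flags.
Since O_RDONLY is usually 0 and O_RDWR usually doesn't include the O_WRONLY bit, you have to
mask with O_ACCMODE and compare rather than test individual bits.
{code}
#include <fcntl.h>
#include <stdio.h>

/* Made-up helper, just to show the check: extract the access mode with
 * O_ACCMODE and compare it as a value. */
static int isReadOnly(int flags)
{
    return (flags & O_ACCMODE) == O_RDONLY;
}

int main(void)
{
    printf("%d %d %d\n",
           isReadOnly(O_RDONLY),            /* 1 */
           isReadOnly(O_WRONLY | O_APPEND), /* 0 */
           isReadOnly(O_RDWR));             /* 0; note that the old test
                                             * (flags & O_WRONLY) == 0 would
                                             * wrongly say "read only" here */
    return 0;
}
{code}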

more to come in next comment...
                
> libhdfs: fix exception handling
> -------------------------------
>
>                 Key: HDFS-3579
>                 URL: https://issues.apache.org/jira/browse/HDFS-3579
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 2.0.1-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-3579.004.patch, HDFS-3579.005.patch
>
>
> libhdfs does not consistently handle exceptions.  Sometimes we don't free the memory
> associated with them (memory leak).  Sometimes we invoke JNI functions that are not supposed
> to be invoked when an exception is active.
> Running a libhdfs test program with -Xcheck:jni shows the latter problem clearly:
> {code}
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> WARNING in native method: JNI call made with exception pending
> Exception in thread "main" java.io.IOException: ...
> {code}


        
