hadoop-hdfs-issues mailing list archives

From "Sailesh Mukil (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11529) Add libHDFS API to return last exception
Date Tue, 18 Apr 2017 20:57:42 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973488#comment-15973488 ]

Sailesh Mukil commented on HDFS-11529:

Thanks [~jzhuge]!
#149: Why rename? Doesn’t the function still print it? This touches off so many one-line
This was a comment left by Colin. The reason for the name change is that it now does more
than just print the exception. It also handles it by updating the TLS, so leaving the name
as printExceptionAndFree() would have been a little misleading.

#178: Double space

#23: Please update the javadoc:

#168,169: Even though other places have done it, I still find the double-underscore prefix
wrong, because identifiers with a double-underscore prefix are reserved for the implementation.

#168-181: Replace “str” and “substr” in macro body with “(str)” and “(substr)”

#32: Any reason to remove the javadoc?
Looks like I removed it by mistake. I've added it back now.

I tried the patch on Centos 7.2. Unit tests passed.
Any testing done on Windows?
Unfortunately, I do not have a Windows setup, and I've been told by a PMC member that Windows
is not supported. Also, there was a bug on trunk that would have made it unusable on Windows,
which suggests that no one is using it there. I just modified the code to work with the new
changes on a best-effort basis.

> Add libHDFS API to return last exception
> ----------------------------------------
>                 Key: HDFS-11529
>                 URL: https://issues.apache.org/jira/browse/HDFS-11529
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 2.6.0
>            Reporter: Sailesh Mukil
>            Assignee: Sailesh Mukil
>            Priority: Critical
>              Labels: errorhandling, libhdfs
>         Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, HDFS-11529.002.patch,
> libHDFS uses a table to compare exceptions against and returns a corresponding error
code to the application in case of an error.
> However, this table is manually populated and is often forgotten when new exceptions
are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever these exceptions
are hit. These are some examples of exceptions that have been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not supported in
state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid
credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> It is of course not possible to have an error code for each and every type of exception,
so one suggestion of how this can be addressed is by having a call such as hdfsGetLastException()
that would return the last exception that a libHDFS thread encountered. This way, an application
may choose to call hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, this makes
sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
