hadoop-hdfs-issues mailing list archives

From "Charles Lamb (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6422) getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist
Date Mon, 21 Jul 2014 21:00:41 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb updated HDFS-6422:

    Attachment: HDFS-6422.007.patch

bq. logAuditEvent(false, "getXAttr", src); --> logAuditEvent(false, "getXAttrs", src);


    } else {
    +        throw new IOException("No matching attributes found");
Changed to "No matching attributes found for remove operation"

bq. And this condition makes me think about the retryCache. I hope it is handled here, let me check.
For example, the first call may succeed internally but the connection may be restarted/disconnected;
in that case an idempotent API will be retried from the client, so the next call may fail because
the xattr was already removed. Do you think we need to mark this as AtMostOnce?

Good catch. You're right that my changes require removeXAttr to
become AtMostOnce. I've changed the code to reflect that.
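
A minimal, self-contained sketch of the failure mode described above (plain Java, not the actual Hadoop RetryCache code; `XAttrStore` and its methods are hypothetical stand-ins): the first remove succeeds, so a blind client retry of the same call fails, which is why the operation cannot be treated as idempotent.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the server-side xattr store.
class XAttrStore {
    private final Map<String, byte[]> xattrs = new HashMap<>();

    void setXAttr(String name, byte[] value) {
        xattrs.put(name, value);
    }

    // Not idempotent: a second call for the same name fails.
    void removeXAttr(String name) throws IOException {
        if (xattrs.remove(name) == null) {
            throw new IOException("No matching attributes found for remove operation");
        }
    }
}

public class RetryDemo {
    public static void main(String[] args) throws IOException {
        XAttrStore store = new XAttrStore();
        store.setXAttr("user.blah", new byte[] {1});

        store.removeXAttr("user.blah"); // first call succeeds on the server ...

        boolean retryFailed = false;
        try {
            store.removeXAttr("user.blah"); // ... but a client retry of it fails
        } catch (IOException expected) {
            retryFailed = true;
        }
        if (!retryFailed) {
            throw new AssertionError("retry should have failed");
        }
    }
}
```

Marking the RPC AtMostOnce lets the retry cache replay the first call's recorded result instead of re-executing the remove.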

bq. I think the exception message below can be refined, to something like "Some/all attributes do
not match to get"?

I've changed this to "At least one of the attributes provided was not found."
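
A hedged sketch of that behavior in plain Java (the store and helper method here are hypothetical, not the actual FSNamesystem code): when any requested attribute is absent, the whole call fails instead of silently returning only the ones that matched.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GetXAttrsDemo {
    // Fail the whole request if any requested name is missing,
    // rather than silently returning a partial (or empty) result.
    static List<byte[]> getXAttrs(Map<String, byte[]> stored, List<String> names)
            throws IOException {
        List<byte[]> result = new ArrayList<>();
        for (String name : names) {
            byte[] value = stored.get(name);
            if (value == null) {
                throw new IOException("At least one of the attributes provided was not found.");
            }
            result.add(value);
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> stored = new HashMap<>();
        stored.put("user.a", new byte[] {42});

        // Present attribute: returned normally.
        if (getXAttrs(stored, List.of("user.a")).size() != 1) {
            throw new AssertionError("expected one value");
        }

        // Missing attribute: the call throws instead of reporting success.
        boolean threw = false;
        try {
            getXAttrs(stored, List.of("user.blah"));
        } catch (IOException expected) {
            threw = true;
        }
        if (!threw) {
            throw new AssertionError("missing xattr should raise");
        }
    }
}
```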


bq. From the below code, we don't need out.toString as we did not asserted anything.


bq. We need to shutdown the mini cluster as well.



bq. Please handle only specific exceptions. If it throws an unexpected exception, let it throw
out; we need not assert and throw.

All of this is due to WebHDFS throwing a different exception from the regular path. WebHDFS
throws a RemoteException which wraps a HadoopIllegalArgumentException. In other words, the
WebHDFS client does not unwrap the exception. You'll see in the diff that I've changed the
exception handling to catch both RemoteException and HadoopIllegalArgumentException. In the
former case, I check to see that the underlying exception is a HIAE.
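
A self-contained sketch of that catch-both pattern (the classes below are stand-ins for the real org.apache.hadoop.ipc.RemoteException and org.apache.hadoop.HadoopIllegalArgumentException, so this compiles without Hadoop on the classpath): the test helper accepts either the wrapped WebHDFS form or the unwrapped regular-path form of the same failure, and anything else propagates.

```java
// Hypothetical stand-ins for the two Hadoop exception classes named above.
class HadoopIllegalArgumentException extends IllegalArgumentException {
    HadoopIllegalArgumentException(String msg) { super(msg); }
}

class RemoteException extends java.io.IOException {
    private final String className;          // class name of the wrapped server-side exception
    RemoteException(String className, String msg) {
        super(msg);
        this.className = className;
    }
    String getClassName() { return className; }
}

public class UnwrapDemo {
    interface Call { void run() throws Exception; }

    // Accept either form of the expected failure; rethrow anything else.
    static void expectBadXAttrName(Call call) throws Exception {
        try {
            call.run();
            throw new AssertionError("expected an exception");
        } catch (RemoteException re) {
            // WebHDFS path: verify the wrapped exception is a HIAE.
            if (!re.getClassName().endsWith("HadoopIllegalArgumentException")) {
                throw new AssertionError("unexpected wrapped type: " + re.getClassName());
            }
        } catch (HadoopIllegalArgumentException e) {
            // Regular FileSystem path: the exception arrives unwrapped.
        }
    }

    public static void main(String[] args) throws Exception {
        // Regular path throws the bare exception ...
        expectBadXAttrName(() -> {
            throw new HadoopIllegalArgumentException("bad xattr namespace");
        });
        // ... while the WebHDFS client leaves it wrapped in RemoteException.
        expectBadXAttrName(() -> {
            throw new RemoteException(
                "org.apache.hadoop.HadoopIllegalArgumentException",
                "bad xattr namespace");
        });
    }
}
```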


    private static Domain DOMAIN = new Domain(NAME,
    +      Pattern.compile(".*"));

bq. I understand that we try to eliminate the client validation because we will not have the
flexibility to add more namespaces in future. But that pattern can be the same as <Namespace>,
right? So how about validating the pattern? Please check with Andrew as well what he says. But
I have no strong feeling on that; it is a suggestion.

I understand your concern. The problem is that WebHDFS would then be doing client-side checking,
and the exception would be generated and thrown from two different places. We wanted to unify all
of the xattr namespace checking in one place on the server side, so that the exception is
generated in only one place. I talked to Andrew and he's OK with leaving it as it is in the patch.
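
A hedged sketch of what that single server-side checkpoint looks like (plain Java; `parseNamespace` and the exact error wording are illustrative, not the patch's code, though the user/trusted/system/security prefixes are the ones HDFS xattrs use):

```java
import java.util.Locale;

public class XAttrNamespaceDemo {
    enum NameSpace { USER, TRUSTED, SYSTEM, SECURITY }

    // Single server-side checkpoint: both the RPC and WebHDFS paths
    // funnel through here, so the error is produced in one place.
    static NameSpace parseNamespace(String fullName) {
        int dot = fullName.indexOf('.');
        String prefix = dot < 0 ? "" : fullName.substring(0, dot);
        try {
            return NameSpace.valueOf(prefix.toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            throw new IllegalArgumentException(
                "An XAttr name must be prefixed with user/trusted/security/system, "
                + "followed by a '.'");
        }
    }

    public static void main(String[] args) {
        if (parseNamespace("user.blah") != NameSpace.USER) {
            throw new AssertionError("expected USER namespace");
        }
        boolean rejected = false;
        try {
            parseNamespace("bogus.blah");
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        if (!rejected) {
            throw new AssertionError("unknown namespace should be rejected");
        }
    }
}
```

With the check centralized like this, the client-side pattern can stay permissive (`.*`) without the two paths ever disagreeing about what is valid.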

> getfattr in CLI doesn't throw exception or return non-0 return code when xattr doesn't exist
> --------------------------------------------------------------------------------------------
>                 Key: HDFS-6422
>                 URL: https://issues.apache.org/jira/browse/HDFS-6422
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.5.0
>            Reporter: Charles Lamb
>            Assignee: Charles Lamb
>            Priority: Blocker
>         Attachments: HDFS-6422.005.patch, HDFS-6422.006.patch, HDFS-6422.007.patch, HDFS-6422.1.patch, HDFS-6422.2.patch, HDFS-6422.3.patch, HDFS-6474.4.patch
> If you do
> hdfs dfs -getfattr -n user.blah /foo
> and user.blah doesn't exist, the command prints
> # file: /foo
> and a 0 return code.
> It should print an exception and return a non-0 return code instead.

This message was sent by Atlassian JIRA
