hadoop-hdfs-issues mailing list archives

From "Jake Low (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-9714) Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist
Date Wed, 27 Jan 2016 22:14:40 GMT

     [ https://issues.apache.org/jira/browse/HDFS-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jake Low updated HDFS-9714:
---------------------------
    Description: 
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke the semantics
of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, when {{rename}} or {{rename2}} was called with a {{src}} path that was resolvable
(i.e. each ancestor directory was executable by the user and therefore could be traversed)
but which did not itself exist, the Namenode would reply with a {{FileNotFoundException}}.
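
As a minimal reproduction sketch (the paths and the unprivileged example user are illustrative, not taken from a specific test), the call can be driven through {{FileContext}}, which maps to the {{rename2}} RPC and propagates the Namenode's exception to the client:

{code}
// Sketch only: assumes "/foo" does not exist and "/" is traversable by everyone.
// Pre-2.7.0 this surfaces FileNotFoundException; on 2.7.0+ it surfaces the
// AccessControlException shown below instead.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class RenameMissingSrc {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());

    // src resolves (every ancestor directory is executable) but does not exist.
    fc.rename(new Path("/foo"), new Path("/bar"), Options.Rename.NONE);
  }
}
{code}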

The refactoring that took place in HDFS-7509 to avoid duplicate path resolutions at different
phases of a rename operation had the side effect of breaking this behavior. In 2.7.0 and above,
the Namenode instead raises the following:

{noformat}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: Permission denied: user=nobody,
access=WRITE, inode="/foo":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
        at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
        at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

{noformat}

Note the {{hdfs:supergroup:drwxr-xr-x}} in the error string. {{/foo}} doesn't exist, so it of
course has no owner, group or mode bits. The information shown above is actually the ownership
and access rights of {{/}}, which would be {{/foo}}'s parent if it existed.
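
One way to confirm this (again just an illustrative check, not part of the original report) is to compare the attributes in the message against the status of the root directory:

{code}
// Sketch only: the owner/group/mode printed here should match the
// "hdfs:supergroup:drwxr-xr-x" reported for the non-existent "/foo",
// since those attributes actually belong to its parent, "/".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowRootStatus {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus root = fs.getFileStatus(new Path("/"));
    System.out.println(root.getOwner() + ":" + root.getGroup()
        + ":d" + root.getPermission());
  }
}
{code}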

  was:
It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke the semantics
of the {{rename}} and {{rename2}} RPCs.

Prior to 2.7.0, when {{rename}} or {{rename2}} was called with a {{src}} path that was resolvable
(i.e. each ancestor directory was executable by the user and therefore could be traversed)
but which did not itself exist, the Namenode would reply with a {{FileNotFoundException}}.

The refactoring that took place in HDFS-7509 to avoid duplicate path resolutions at different
phases of a rename operation had the side effect of breaking this behavior. In 2.7.0 and above,
the Namenode instead raises the following:

{code}
org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: Permission denied: user=nobody,
access=WRITE, inode="/foo":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
        at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
        at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

{code}

Note the {{hdfs:supergroup:drwxr-xr-x}} in the error string. {{/foo}} doesn't exist, so it of
course has no owner, group or mode bits. The information shown above is actually the ownership
and access rights of {{/}}, which would be {{/foo}}'s parent if it existed.


> Rename throws AccessControlException (not FileNotFoundException) when src doesn't exist
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-9714
>                 URL: https://issues.apache.org/jira/browse/HDFS-9714
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.0, 2.7.1, 2.7.2
>            Reporter: Jake Low
>
> It looks like [HDFS-7509|https://issues.apache.org/jira/browse/HDFS-7509] broke the semantics
of the {{rename}} and {{rename2}} RPCs.
> Prior to 2.7.0, when {{rename}} or {{rename2}} was called with a {{src}} path that was resolvable
(i.e. each ancestor directory was executable by the user and therefore could be traversed)
but which did not itself exist, the Namenode would reply with a {{FileNotFoundException}}.
> The refactoring that took place in HDFS-7509 to avoid duplicate path resolutions at different
phases of a rename operation had the side effect of breaking this behavior. In 2.7.0 and above,
the Namenode instead raises the following:
> {noformat}
> org.apache.hadoop.security.AccessControlException: ERROR_APPLICATION: Permission denied:
user=nobody, access=WRITE, inode="/foo":hdfs:supergroup:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:216)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameTo(FSDirRenameOp.java:459)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirRenameOp.renameToInt(FSDirRenameOp.java:73)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:3611)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:864)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:575)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> {noformat}
> Note the {{hdfs:supergroup:drwxr-xr-x}} in the error string. {{/foo}} doesn't exist, so
it of course has no owner, group or mode bits. The information shown above is actually the
ownership and access rights of {{/}}, which would be {{/foo}}'s parent if it existed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
