hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions
Date Wed, 07 May 2014 00:30:22 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991406#comment-13991406 ]

Hadoop QA commented on HDFS-4913:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12643626/HDFS-4913.003.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6839//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6839//console

This message is automatically generated.

> Deleting file through fuse-dfs when using trash fails requiring root permissions
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-4913
>                 URL: https://issues.apache.org/jira/browse/HDFS-4913
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs
>    Affects Versions: 2.0.3-alpha
>            Reporter: Stephen Chu
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at _/user/testuser/testFile1_. As the same user, I try to rm the file and run into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir /user/root/.Trash, which testuser doesn't have permission to write to.
> Ideally, we'd be able to remove testFile1 and have it put into /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
>        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
>        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at javax.security.auth.Subject.doAs(Subject.java:396)
>        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
>        at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>        at $Proxy9.mkdirs(Unknown Source)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>        at $Proxy9.mkdirs(Unknown Source)
>        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:426)
>        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2151)
>        ... 3 more
> {code}
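
The debug output quoted above suggests that fuse-dfs derives the trash prefix from the user who mounted the filesystem (root) rather than from the user issuing the unlink (testuser). A minimal sketch of the per-request alternative, in C since that is the fuse-dfs code base: fuse_get_context() and getpwuid_r() are standard FUSE/POSIX calls, while make_trash_prefix() is a hypothetical helper name for illustration, not something taken from HDFS-4913.003.patch.

{code}
/*
 * Sketch only: build the trash prefix from the user who issued the
 * FUSE request rather than the user who mounted the filesystem.
 * make_trash_prefix() is a hypothetical name; it is not claimed to
 * match what the attached patch actually implements.
 */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <pwd.h>
#include <stdio.h>

static int make_trash_prefix(char *buf, size_t len)
{
    /* fuse_get_context() identifies the caller of this operation
     * (testuser in the report), not the mounting user (root). */
    struct fuse_context *ctx = fuse_get_context();
    struct passwd pwd, *result = NULL;
    char scratch[4096];

    if (getpwuid_r(ctx->uid, &pwd, scratch, sizeof(scratch), &result) != 0 ||
        result == NULL)
        return -1; /* unknown uid: let the unlink fail cleanly */

    /* Yields /user/testuser/.Trash/Current for testuser, so the
     * subsequent rename stays inside the caller's home directory. */
    snprintf(buf, len, "/user/%s/.Trash/Current", pwd.pw_name);
    return 0;
}
{code}

With a prefix computed this way, the rename into trash would land under the deleting user's home directory, matching the behavior the reporter describes as the ideal outcome.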



--
This message was sent by Atlassian JIRA
(v6.2#6252)
