hadoop-user mailing list archives

From Chris Nauroth <cnaur...@hortonworks.com>
Subject Re: DFS Permissions on Hadoop 2.x
Date Wed, 19 Jun 2013 20:01:12 GMT
For anyone curious who hasn't looked at HDFS-4918: we established that
this is actually expected behavior, and it's mentioned in the
documentation. However, since this caused some confusion, I filed
HDFS-4919 to make the documentation clearer.

https://issues.apache.org/jira/browse/HDFS-4919

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi
<prash1784@gmail.com> wrote:

> Thanks guys, I will follow the discussion there.
>
>
> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>
>> Yes, and I think this was introduced by the Snapshot feature.
>>
>> I've filed a JIRA here:
>> https://issues.apache.org/jira/browse/HDFS-4918
>>
>>
>>
>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <harsh@cloudera.com> wrote:
>>
>>> This is an HDFS bug. Like all other methods that check whether
>>> permissions are enabled, the client call of setPermission should check
>>> it as well. It does not do that currently, and I believe it should be
>>> a NOP in such a case. Please do file a JIRA (and reference the ID here
>>> to close the loop)!
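>>>
>>> To make the intent concrete, here is a minimal standalone sketch of
>>> the guard I have in mind -- hypothetical names, not the actual HDFS
>>> code or a real patch:
>>>
>>>   // Sketch: enforcement becomes a NOP when permissions are disabled,
>>>   // while the new permission bits are still applied.
>>>   class PermissionGuardSketch {
>>>     private final boolean permissionEnabled; // dfs.permissions.enabled
>>>
>>>     PermissionGuardSketch(boolean permissionEnabled) {
>>>       this.permissionEnabled = permissionEnabled;
>>>     }
>>>
>>>     void setPermission(String path, short mode) {
>>>       if (permissionEnabled) {
>>>         checkOwner(path); // only enforce ownership when enabled
>>>       }
>>>       applyMode(path, mode); // always record the new mode bits
>>>     }
>>>
>>>     private void checkOwner(String path) {
>>>       // a real NameNode would throw AccessControlException here
>>>     }
>>>
>>>     private void applyMode(String path, short mode) {
>>>       // a real NameNode would update the inode's permission status
>>>     }
>>>   }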
>>>
>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>>> <prash1784@gmail.com> wrote:
>>> > Looks like the jobs fail only on the first attempt and pass
>>> > thereafter. Failure occurs while setting perms on the "intermediate
>>> > done directory". Here is what I think is happening:
>>> >
>>> > 1. Intermediate done dir is (ideally) created as part of deployment
>>> > (for eg, /mapred/history/done_intermediate)
>>> >
>>> > 2. When a MR job is run, it creates a user dir within the
>>> > intermediate done dir (/mapred/history/done_intermediate/username)
>>> >
>>> > 3. After this dir is created, the code tries to set permissions on
>>> > this user dir. In doing so, it checks for EXECUTE permissions not
>>> > just on its parent (/mapred/history/done_intermediate) but across
>>> > all dirs up to the top-most level (/mapred). This fails, as
>>> > "/mapred" does not have execute permissions for "Other" users (see
>>> > the sketch after this list).
>>> >
>>> > 4. On successive job runs, since the user dir already exists
>>> > (/mapred/history/done_intermediate/username), it no longer tries to
>>> > create and set permissions again, and the job completes without any
>>> > perm errors.
>>> >
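>>> > To illustrate step 3, a minimal repro sketch -- hypothetical code;
>>> > the paths come from the trace below, and the 0770 mode mirrors the
>>> > drwxrwx--- shown there. Even with dfs.permissions.enabled=false, the
>>> > mkdir goes through, but the setPermission call still runs the
>>> > owner/traverse check and trips on "/mapred":
>>> >
>>> >   import org.apache.hadoop.conf.Configuration;
>>> >   import org.apache.hadoop.fs.FileSystem;
>>> >   import org.apache.hadoop.fs.Path;
>>> >   import org.apache.hadoop.fs.permission.FsPermission;
>>> >
>>> >   public class TraverseCheckRepro {
>>> >     public static void main(String[] args) throws Exception {
>>> >       FileSystem fs = FileSystem.get(new Configuration());
>>> >       Path userDir = new Path("/mapred/history/done_intermediate/username");
>>> >       fs.mkdirs(userDir); // succeeds, even on the first run
>>> >       // Throws AccessControlException (access=EXECUTE, inode="/mapred")
>>> >       // because the caller lacks x on the ancestor "/mapred":
>>> >       fs.setPermission(userDir, new FsPermission((short) 0770));
>>> >     }
>>> >   }
>>> >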
>>> > This is the code within JobHistoryEventHandler that's doing it:
>>> >
>>> >     // Check for the existence of intermediate done dir.
>>> >     Path doneDirPath = null;
>>> >     try {
>>> >       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>> >       // This directory will be in a common location, or this may be a cluster
>>> >       // meant for a single user. Creating based on the conf. Should ideally
>>> >       // be created by the JobHistoryServer or as part of deployment.
>>> >       if (!doneDirFS.exists(doneDirPath)) {
>>> >         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>> >           LOG.info("Creating intermediate history logDir: ["
>>> >               + doneDirPath
>>> >               + "] + based on conf. Should ideally be created by the "
>>> >               + "JobHistoryServer: "
>>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>> >           mkdir(
>>> >               doneDirFS,
>>> >               doneDirPath,
>>> >               new FsPermission(
>>> >                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>>> >                       .toShort()));
>>> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
>>> >           // respects sticky
>>> >         } else {
>>> >           String message = "Not creating intermediate history logDir: ["
>>> >               + doneDirPath
>>> >               + "] based on conf: "
>>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>> >               + ". Either set to true or pre-create this directory with"
>>> >               + " appropriate permissions";
>>> >           LOG.error(message);
>>> >           throw new YarnException(message);
>>> >         }
>>> >       }
>>> >     } catch (IOException e) {
>>> >       LOG.error("Failed checking for the existence of history intermediate "
>>> >           + "done directory: [" + doneDirPath + "]");
>>> >       throw new YarnException(e);
>>> >     }
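>>> >
>>> > As that error message suggests, one workaround is to pre-create the
>>> > directory with appropriate permissions yourself. A hedged sketch
>>> > using the FileSystem API (01777 is my assumption of what the
>>> > HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS constant above resolves
>>> > to -- verify against your version):
>>> >
>>> >   FileSystem fs = FileSystem.get(new Configuration());
>>> >   Path done = new Path("/mapred/history/done_intermediate");
>>> >   fs.mkdirs(done);
>>> >   // set after mkdirs so the process umask can't mask the bits
>>> >   fs.setPermission(done, new FsPermission((short) 01777));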
>>> >
>>> >
>>> > In any case, this does not appear to be the right behavior, as it
>>> > does not respect "dfs.permissions.enabled" (set to false) at any
>>> > point. Sounds like a bug?
>>> >
>>> > Thanks, Prashant
>>> >
>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi
>>> > <prash1784@gmail.com> wrote:
>>> >>
>>> >> Hi Chris,
>>> >>
>>> >> This is while running a MR job. Please note the job is able to
>>> >> write files to the "/mapred" directory and fails on EXECUTE
>>> >> permissions. On digging in some more, it looks like the failure
>>> >> occurs after writing to "/mapred/history/done_intermediate".
>>> >>
>>> >> Here is a more detailed stacktrace.
>>> >>
>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>> >> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler closeEventWriter
>>> >> INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>> >> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> >>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> >>      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>> >>      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>> >>      at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>> >>      at java.lang.Thread.run(Thread.java:662)
>>> >> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>> >>      at $Proxy9.setPermission(Unknown Source)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> >>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>> >>      at $Proxy10.setPermission(Unknown Source)
>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>> >>      ... 5 more
>>> >> Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>> >> org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>> >>      at java.lang.Thread.run(Thread.java:662)
>>> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> >>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> >>      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>> >>      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>> >>      at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>> >>      ... 2 more
>>> >> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>> >>      at $Proxy9.setPermission(Unknown Source)
>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> >>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>> >>      at $Proxy10.setPermission(Unknown Source)
>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>> >>      ... 5 more
>>> >> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>>> >> INFO: Received completed container container_1371593763906_0001_01_000003
>>> >> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.
>>> >>
>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth
>>> >> <cnauroth@hortonworks.com> wrote:
>>> >>>
>>> >>> Prashant, can you provide more details about what you're doing when
>>> >>> you see this error?  Are you submitting a MapReduce job, running an
>>> >>> HDFS shell command, or doing some other action?  It's possible that
>>> >>> we're also seeing an interaction with some other change in 2.x that
>>> >>> triggers a setPermission call that wasn't there in 0.20.2.  I think
>>> >>> the problem with the HDFS setPermission API is present in both
>>> >>> 0.20.2 and 2.x, but if the code in 0.20.2 never triggered a
>>> >>> setPermission call for your usage, then you wouldn't have seen the
>>> >>> problem.
>>> >>>
>>> >>> I'd like to gather these details for submitting a new bug report to
>>> >>> HDFS.  Thanks!
>>> >>>
>>> >>> Chris Nauroth
>>> >>> Hortonworks
>>> >>> http://hortonworks.com/
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <lleung@ddn.com> wrote:
>>> >>>>
>>> >>>> I believe the property name should be “dfs.permissions”.
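>>> >>>>
>>> >>>> (As far as I know, "dfs.permissions" was the 1.x name; on 2.x the
>>> >>>> key is "dfs.permissions.enabled", and the old name is still honored
>>> >>>> as a deprecated alias, so either spelling should take effect.)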
>>> >>>>
>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> >>>> To: user@hadoop.apache.org
>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Hello,
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> >>>> question around disabling dfs permissions on the latter version.
>>> >>>> For some reason, setting the following config does not seem to work.
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> <property>
>>> >>>>   <name>dfs.permissions.enabled</name>
>>> >>>>   <value>false</value>
>>> >>>> </property>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Any other configs that might be needed for this?
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Here is the stacktrace.
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>>>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>>>         at java.security.AccessController.doPrivileged(Native Method)
>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>
