Subject: Re: DFS Permissions on Hadoop 2.x
From: Prashant Kommireddi <prash1784@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 18 Jun 2013 15:24:43 -0700
List-Id: user@hadoop.apache.org (run by ezmlm)

Hi Chris,

This is while running a MR job. Please note the job is able to write files to the "/mapred" directory but fails on EXECUTE permissions. On digging in some more, it looks like the failure occurs after writing to "/mapred/history/done_intermediate".

Here is a more detailed stacktrace.
INFO: Job end notification started for jobID : job_1371593763906_0001
Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler closeEventWriter
INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
	... 2 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
INFO: Received completed container container_1371593763906_0001_01_000003
Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.

On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth wrote:

> Prashant, can you provide more details about what you're doing when you
> see this error?
> Are you submitting a MapReduce job, running an HDFS shell
> command, or doing some other action? It's possible that we're also seeing
> an interaction with some other change in 2.x that triggers a setPermission
> call that wasn't there in 0.20.2. I think the problem with the HDFS
> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> 0.20.2 never triggered a setPermission call for your usage, then you
> wouldn't have seen the problem.
>
> I'd like to gather these details for submitting a new bug report to HDFS.
> Thanks!
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung wrote:
>
>> I believe the property name should be "dfs.permissions"
>>
>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>> Sent: Tuesday, June 18, 2013 10:54 AM
>> To: user@hadoop.apache.org
>> Subject: DFS Permissions on Hadoop 2.x
>>
>> Hello,
>>
>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> question around disabling dfs permissions on the latter version. For some
>> reason, setting the following config does not seem to work:
>>
>> <property>
>>   <name>dfs.permissions.enabled</name>
>>   <value>false</value>
>> </property>
>>
>> Any other configs that might be needed for this?
>>
>> Here is the stacktrace.
>>
>> 2013-06-17 17:35:45,429 INFO ipc.Server - IPC Server handler 62 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
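[Editorial note for archive readers] The denial above is consistent with how HDFS applies POSIX-style permission checks: resolving "/mapred/history/done_intermediate/..." requires traverse (EXECUTE) permission on each ancestor, and with /mapred owned by pkommireddi:supergroup at drwxrwx---, a user who is neither the owner nor in the group falls through to the empty "other" bits. The toy model below (hypothetical helper names, not Hadoop code) sketches that class-selection logic under the assumption that smehta is not a member of supergroup:

```python
# Toy model of a POSIX-style permission check like the one
# FSPermissionChecker performs. Illustrative only; names and the
# group membership of "smehta" are assumptions, not Hadoop internals.

EXECUTE = 1  # x bit; on a directory this is the "traverse" permission


def check(user, groups, owner, group, mode, want):
    """Pick the permission class (owner, group, or other), then test bits."""
    if user == owner:
        bits = (mode >> 6) & 0o7   # owner bits
    elif group in groups:
        bits = (mode >> 3) & 0o7   # group bits
    else:
        bits = mode & 0o7          # "other" bits
    return bits & want == want


# /mapred is pkommireddi:supergroup with mode drwxrwx--- (0o770).
# If smehta is neither the owner nor in supergroup, the "other"
# bits (---) apply and traverse is denied -- the error in this thread.
allowed = check(user="smehta", groups={"smehta"},
                owner="pkommireddi", group="supergroup",
                mode=0o770, want=EXECUTE)
print(allowed)  # False
```

This would also explain why the job's earlier writes succeeded while the history handler failed: only the setPermission call on a path under /mapred triggers the owner/traverse checks shown in the trace, and with dfs.permissions.enabled=false the NameNode would be expected to skip the traverse check entirely, which is why the config question in this thread matters.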
Hi Chris,

This is while running a= MR job. Please note the job is able to write files to "/mapred" = directory and fails on EXECUTE permissions. On digging in some more, it loo= ks like the failure occurs after writing to "/mapred/history/done_inte= rmediate".

Here is a more detailed stacktrace.

INFO: Job end notification started for j=
obID : job_1371593763906_0001
Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEv=
entHandler closeEventWriter
INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/ma=
pred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
org.apache.hadoop.security.AccessControlException: Permission denied: user=
=3Dsmehta, access=3DEXECUTE, inode=3D"/mapred":pkommireddi:superg=
roup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPerm=
issionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTravers=
e(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermiss=
ion(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSN=
amesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesy=
stem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FS=
Namesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNam=
esystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(=
NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTrans=
latorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253=
)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Clie=
ntNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:4=
4074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(=
ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformati=
on.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructor=
AccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingCon=
structorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteExcept=
ion.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteExcep=
tion.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedF=
ileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEven=
tWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEve=
nt(JobHistoryEventHandler.java:521)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(Job=
HistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security=
.AccessControlException): Permission denied: user=3Dsmehta, access=3DEXECUT=
E, inode=3D"/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPerm=
issionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTravers=
e(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermiss=
ion(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSN=
amesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesy=
stem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FS=
Namesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNam=
esystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(=
NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTrans=
latorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253=
)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Clie=
ntNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:4=
4074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(=
ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformati=
on.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngin=
e.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.se=
tPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.ja=
va:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccesso=
rImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInv=
ocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocatio=
nHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler=
 uncaughtException
SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessCont=
rolException: Permission denied: user=3Dsmehta, access=3DEXECUTE, inode=3D&=
quot;/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPerm=
issionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTravers=
e(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermiss=
ion(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSN=
amesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesy=
stem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FS=
Namesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNam=
esystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(=
NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTrans=
latorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253=
)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$Clie=
ntNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:4=
4074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
	... 2 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
INFO: Received completed container container_1371593763906_0001_01_000003
Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.
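[Editor's note] The denied access=EXECUTE on a directory is HDFS's traverse check: as in POSIX, reaching anything under /mapred requires the execute bit on /mapred itself for the calling user. A minimal Python sketch of that rule, illustrative only and not the real FSPermissionChecker logic:

```python
# Sketch of HDFS's path-traverse check: to touch any inode, the caller
# needs the EXECUTE bit on every ancestor directory, resolved against
# owner/group/other exactly as in POSIX mode bits.

def can_traverse(user, groups, path, inodes):
    """inodes: dir path -> (owner, group, mode bits). Returns True if
    `user` (member of `groups`) may traverse every ancestor of `path`."""
    parts = [p for p in path.strip("/").split("/") if p]
    current = ""
    for part in parts[:-1]:          # check ancestors, not the leaf itself
        current += "/" + part
        owner, group, mode = inodes[current]
        if user == owner:
            ok = bool(mode & 0o100)  # owner execute bit
        elif group in groups:
            ok = bool(mode & 0o010)  # group execute bit
        else:
            ok = bool(mode & 0o001)  # other execute bit
        if not ok:
            return False
    return True

# The case from the trace: /mapred is pkommireddi:supergroup:drwxrwx---,
# so a user who is neither pkommireddi nor in supergroup is denied.
inodes = {
    "/mapred": ("pkommireddi", "supergroup", 0o770),
    "/mapred/history": ("pkommireddi", "supergroup", 0o770),
}
print(can_traverse("smehta", {"users"}, "/mapred/history/done_intermediate", inodes))        # False
print(can_traverse("pkommireddi", {"supergroup"}, "/mapred/history/done_intermediate", inodes))  # True
```

This is why the job can still write under /mapred (the AM runs as the owner) but a different user, smehta here, fails the traverse check on the same tree.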


On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cnauroth@hortonworks.com> wrote:
Prashant, can you provide more details about what you're doing when you see this error? Are you submitting a MapReduce job, running an HDFS shell command, or doing some other action? It's possible that we're also seeing an interaction with some other change in 2.x that triggers a setPermission call that wasn't there in 0.20.2. I think the problem with the HDFS setPermission API is present in both 0.20.2 and 2.x, but if the code in 0.20.2 never triggered a setPermission call for your usage, then you wouldn't have seen the problem.

I'd like to gather these details for submitting a new bug report to HDFS. Thanks!

Chris Nauroth
Hortonworks



On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <lleung@ddn.com> wrote:

I believe the property name should be "dfs.permissions"
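[Editor's note] In Hadoop 2.x the key `dfs.permissions` was renamed to `dfs.permissions.enabled`, and stock 2.x still honors the old name through its configuration deprecation mapping, so either spelling should take effect there. A hedged hdfs-site.xml sketch covering both names (assuming that deprecation mapping applies):

```xml
<!-- hdfs-site.xml: disable HDFS permission checking.
     dfs.permissions.enabled is the 2.x name; dfs.permissions is the
     pre-2.x name, included here only for mixed-version tooling. -->
<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
```

Note this setting lives on the NameNode; changing it requires a NameNode restart to take effect.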


From: Prashant Kommireddi [mailto:prash1784@gmail.com]
Sent: Tuesday, June 18, 2013 10:54 AM
To: user@hadoop.apache.org
Subject: DFS Permissions on Hadoop 2.x


Hello,


We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a question around disabling dfs permissions on the latter version. For some reason, setting the following config does not seem to work.


<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>


Any other configs that might be needed for this?


Here is the stacktrace.


2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---

org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---

	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
