Date: Wed, 19 Jun 2013 13:31:32 -0700
Subject: Re: DFS Permissions on Hadoop 2.x
From: Prashant Kommireddi <prash1784@gmail.com>
To: user@hadoop.apache.org

How can we resolve the issue in the case I have mentioned? File an MR JIRA that does not try to check permissions when dfs.permissions.enabled is set to false?

The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense w.r.t. HDFS behavior (thanks for that). But I am still unsure how we can get around the fact that certain permissions are set on shared directories by a certain user that disallow any other users from using them. Or am I missing something entirely?
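For illustration only, here is a minimal sketch of the kind of change such a JIRA might ask for: have the history handler treat the setPermission call on an already-existing shared directory as best-effort instead of fatal. The class and method names below are hypothetical and this is not the actual MAPREDUCE or HDFS patch; it only shows the idea under discussion.

    import java.io.IOException;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.security.AccessControlException;

    // Hypothetical helper: apply the desired permissions to the intermediate
    // done dir, but do not fail the job's history flush when the cluster
    // rejects the call (e.g. a shared /mapred tree owned by another user).
    class BestEffortHistoryPermissions {
      static void trySetPermission(FileSystem fs, Path dir, FsPermission perm)
          throws IOException {
        try {
          fs.setPermission(dir, perm);
        } catch (AccessControlException e) {
          // The directory already exists and is usable; log and continue
          // instead of throwing, which is what fails the first job attempt.
          System.err.println("Skipping setPermission on " + dir + ": " + e.getMessage());
        }
      }
    }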
On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cnauroth@hortonworks.com> wrote:

> Just in case anyone is curious who didn't look at HDFS-4918, we established that this is actually expected behavior, and it's mentioned in the documentation. However, I filed HDFS-4919 to make the information clearer in the documentation, since this caused some confusion.
>
> https://issues.apache.org/jira/browse/HDFS-4919
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <prash1784@gmail.com> wrote:
>
>> Thanks guys, I will follow the discussion there.
>>
>>
>> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>>
>>> Yes, and I think this was led by Snapshot.
>>>
>>> I've filed a JIRA here:
>>> https://issues.apache.org/jira/browse/HDFS-4918
>>>
>>>
>>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <harsh@cloudera.com> wrote:
>>>
>>>> This is an HDFS bug. Like all other methods that check for permissions being enabled, the client call of setPermission should check it as well. It does not do that currently, and I believe it should be a NOP in such a case. Please do file a JIRA (and reference the ID here to close the loop)!
>>>>
>>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi <prash1784@gmail.com> wrote:
>>>> > Looks like the jobs fail only on the first attempt and pass thereafter. Failure occurs while setting perms on the "intermediate done directory". Here is what I think is happening:
>>>> >
>>>> > 1. Intermediate done dir is (ideally) created as part of deployment (for eg, /mapred/history/done_intermediate)
>>>> >
>>>> > 2. When a MR job is run, it creates a user dir within the intermediate done dir (/mapred/history/done_intermediate/username)
>>>> >
>>>> > 3. After this dir is created, the code tries to set permissions on this user dir. In doing so, it checks for EXECUTE permissions on not just its parent (/mapred/history/done_intermediate) but across all dirs to the top-most level (/mapred). This fails as "/mapred" does not have execute permissions for the "Other" users.
>>>> >
>>>> > 4. On successive job runs, since the user dir already exists (/mapred/history/done_intermediate/username) it no longer tries to create and set permissions again. And the job completes without any perm errors.
>>>> >
>>>> > This is the code within JobHistoryEventHandler that's doing it:
>>>> >
>>>> >   //Check for the existence of intermediate done dir.
>>>> >   Path doneDirPath = null;
>>>> >   try {
>>>> >     doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>>>> >     doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>>> >     // This directory will be in a common location, or this may be a cluster
>>>> >     // meant for a single user. Creating based on the conf. Should ideally be
>>>> >     // created by the JobHistoryServer or as part of deployment.
>>>> >     if (!doneDirFS.exists(doneDirPath)) {
>>>> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>>> >         LOG.info("Creating intermediate history logDir: ["
>>>> >             + doneDirPath
>>>> >             + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>>>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>>> >         mkdir(
>>>> >             doneDirFS,
>>>> >             doneDirPath,
>>>> >             new FsPermission(
>>>> >                 JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS.toShort()));
>>>> >         // TODO Temporary toShort till new FsPermission(FsPermissions)
>>>> >         // respects
>>>> >         // sticky
>>>> >       } else {
>>>> >         String message = "Not creating intermediate history logDir: ["
>>>> >             + doneDirPath
>>>> >             + "] based on conf: "
>>>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>>> >             + ". Either set to true or pre-create this directory with"
>>>> >             + " appropriate permissions";
>>>> >         LOG.error(message);
>>>> >         throw new YarnException(message);
>>>> >       }
>>>> >     }
>>>> >   } catch (IOException e) {
>>>> >     LOG.error("Failed checking for the existance of history intermediate "
>>>> >         + "done directory: [" + doneDirPath + "]");
>>>> >     throw new YarnException(e);
>>>> >   }
>>>> >
>>>> > In any case, this does not appear to be the right behavior as it does not respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a bug?
>>>> >
>>>> > Thanks, Prashant
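As an aside, the error string in that code already points at the practical workaround while the JIRAs are discussed: pre-create the history tree and open up traversal on its ancestors. A rough sketch, run as the owner of /mapred or the HDFS superuser; the exact modes are only an example and should match your site's policy:

    hdfs dfs -mkdir -p /mapred/history/done_intermediate
    # Allow traversal (EXECUTE) through the ancestors the check walks, which is
    # what fails on "/mapred":drwxrwx--- in the traces below.
    hdfs dfs -chmod 755 /mapred /mapred/history
    # The intermediate dir itself is typically world-writable with the sticky
    # bit so each user's job can create its own subdirectory.
    hdfs dfs -chmod 1777 /mapred/history/done_intermediate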
>>>> >
>>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <prash1784@gmail.com> wrote:
>>>> >>
>>>> >> Hi Chris,
>>>> >>
>>>> >> This is while running a MR job. Please note the job is able to write files to the "/mapred" directory and fails on EXECUTE permissions. On digging in some more, it looks like the failure occurs after writing to "/mapred/history/done_intermediate".
>>>> >>
>>>> >> Here is a more detailed stacktrace.
>>>> >>
>>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>>> >> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler closeEventWriter
>>>> >> INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>>> >> org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      ... 2 more
>>>> >> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>>>> >> INFO: Received completed container container_1371593763906_0001_01_000003
>>>> >> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
>>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.
>>>> >>
>>>> >>
>>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cnauroth@hortonworks.com> wrote:
>>>> >>>
>>>> >>> Prashant, can you provide more details about what you're doing when you see this error? Are you submitting a MapReduce job, running an HDFS shell command, or doing some other action? It's possible that we're also seeing an interaction with some other change in 2.x that triggers a setPermission call that wasn't there in 0.20.2. I think the problem with the HDFS setPermission API is present in both 0.20.2 and 2.x, but if the code in 0.20.2 never triggered a setPermission call for your usage, then you wouldn't have seen the problem.
>>>> >>>
>>>> >>> I'd like to gather these details for submitting a new bug report to HDFS. Thanks!
>>>> >>>
>>>> >>> Chris Nauroth
>>>> >>> Hortonworks
>>>> >>> http://hortonworks.com/
>>>> >>>
>>>> >>>
>>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <lleung@ddn.com> wrote:
>>>> >>>>
>>>> >>>> I believe the property's name should be "dfs.permissions"
>>>> >>>>
>>>> >>>>
>>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> >>>> To: user@hadoop.apache.org
>>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>>> >>>>
>>>> >>>> Hello,
>>>> >>>>
>>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a question around disabling dfs permissions on the latter version. For some reason, setting the following config does not seem to work:
>>>> >>>>
>>>> >>>> <property>
>>>> >>>>     <name>dfs.permissions.enabled</name>
>>>> >>>>     <value>false</value>
>>>> >>>> </property>
>>>> >>>>
>>>> >>>> Any other configs that might be needed for this?
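A note on that config, as a sketch only: in Hadoop 2.x the property is dfs.permissions.enabled and it belongs in hdfs-site.xml on the NameNode side; dfs.permissions is the older 1.x name, kept as a deprecated alias, so either spelling is expected to be honored once the NameNode picks up the change (typically after a restart). Also, per the documentation behavior the thread converges on, chmod/chgrp/chown style calls still enforce ownership even when permission checking is off, which is why this particular setPermission failure does not simply disappear.

    <!-- hdfs-site.xml on the NameNode; example form only. -->
    <property>
        <!-- 2.x name; "dfs.permissions" is the deprecated 1.x alias. -->
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>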
>>>> >>>>
>>>> >>>> Here is the stacktrace.
>>>> >>>>
>>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>>>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>> --
>>>> Harsh J