hadoop-user mailing list archives

From Krishna Kishore Bonagiri <write2kish...@gmail.com>
Subject Re: Permission related errors when running with a different user
Date Fri, 07 Dec 2012 10:47:23 GMT
Hi Harsh,

  Thanks for the quick reply. I tried that, but it didn't quite work.
Following your suggestion, I looked at the error again and thought the
following might work:

hadoop fs -chmod 777 /

I tried it and it worked.

  That got me past the error, but now I am seeing a different one, this
time in the ResourceManager's logs. Could you please give me a clue about
this one as well?

2012-12-07 05:34:22,421 INFO  fifo.FifoScheduler (FifoScheduler.java:containerCompleted(721)) - Application appattempt_1353856203101_0369_000001 released container container_1353856203101_0369_01_000003 on node: host: isredeng:51271 #containers=1 available=8064 used=128 with event: FINISHED
2012-12-07 05:34:24,401 WARN  attempt.RMAppAttemptImpl (RMAppAttemptImpl.java:generateProxyUriWithoutScheme(379)) - Could not proxify
java.net.URISyntaxException: Expected authority at index 7: http://
        at java.net.URI$Parser.fail(URI.java:2820)
        at java.net.URI$Parser.failExpecting(URI.java:2826)
        at java.net.URI$Parser.parseHierarchical(URI.java:3065)
        at java.net.URI$Parser.parse(URI.java:3025)
        at java.net.URI.<init>(URI.java:589)
        at org.apache.hadoop.yarn.server.webproxy.ProxyUriUtils.getUriFromAMUrl(ProxyUriUtils.java:143)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.generateProxyUriWithoutScheme(RMAppAttemptImpl.java:371)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.access$2500(RMAppAttemptImpl.java:81)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMUnregisteredTransition.transition(RMAppAttemptImpl.java:849)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMUnregisteredTransition.transition(RMAppAttemptImpl.java:835)
        at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:357)
        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:298)
        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:43)
        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:443)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:476)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:80)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:414)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:395)
        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:125)
        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:74)
        at java.lang.Thread.run(Thread.java:736)
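
Reading the trace, the failure seems to come from ProxyUriUtils building a
URI by prefixing the AM's reported address with "http://"; if my Application
Master registered or unregistered with an empty host/tracking URL, the RM is
left parsing the bare string "http://", which is exactly what fails. Here is
a minimal Java sketch that just reproduces the parse failure (the empty
string is my stand-in for whatever the AM reported, not something taken from
the real code path):

import java.net.URI;
import java.net.URISyntaxException;

public class ProxyUriRepro {
    public static void main(String[] args) {
        // Hypothetical: an AM that reported no host/tracking URL
        String amAddress = "";
        try {
            new URI("http://" + amAddress);
        } catch (URISyntaxException e) {
            // Prints: Expected authority at index 7: http://
            System.out.println(e.getMessage());
        }
    }
}

If that reading is right, I suppose the fix is on my side: have the AM
supply a non-empty host and tracking URL when it registers and unregisters
with the RM. Does that sound plausible?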


Thanks,
Kishore

On Thu, Dec 6, 2012 at 8:58 PM, Harsh J <harsh@cloudera.com> wrote:

> You are attempting a job submission as the user "root" over HDFS and
> MR. Running a job involves placing the requisite files on HDFS, so that
> MR can distribute them to the nodes that run the task work.
>
> The files are usually placed under a user's HDFS home directory, which
> is of the form /user/[NAME]. By default, HDFS has no notion of a user
> existing in it (imagine a Linux user account with no home directory
> yet). So, as the HDFS administrator, you'll first have to provision the
> user with a home directory of their own:
>
> Create the home directory and grant ownership of it to the user root:
>
> sudo -u hdfs hadoop fs -mkdir -p /user/root/
> sudo -u hdfs hadoop fs -chown root:root /user/root
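>
> If you would rather do the same thing programmatically, here is a rough
> Java sketch using the FileSystem API (run it as the HDFS superuser; it
> assumes the configuration on the classpath points fs.defaultFS at your
> NameNode, and the class name is just for illustration):
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class ProvisionHomeDir {
>     public static void main(String[] args) throws Exception {
>         // Connect to the filesystem named by the loaded configuration
>         FileSystem fs = FileSystem.get(new Configuration());
>         Path home = new Path("/user/root");
>         fs.mkdirs(home);                   // like: hadoop fs -mkdir -p /user/root
>         fs.setOwner(home, "root", "root"); // like: hadoop fs -chown root:root /user/root
>         fs.close();
>     }
> }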
>
> Once this is done, you can try to resubmit the job and the
> AccessControlException should be resolved.
>
> On Thu, Dec 6, 2012 at 8:28 PM, Krishna Kishore Bonagiri
> <write2kishore@gmail.com> wrote:
> > Hi,
> >   I am running a job as a different user than the one Hadoop is
> > installed with, and I am getting the following error. Please help me
> > resolve it. It is actually a YARN job that I am trying to run.
> >
> > 2012-12-06 09:29:13,997 INFO  Client (Client.java:prepareJarResource(293)) - Copy App Master jar from local filesystem and add to local environment
> > 2012-12-06 09:29:14,476 FATAL Client (Client.java:main(148)) - Error running CLient
> > org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":kbonagir:supergroup:drwxr-xr-x
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4203)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4174)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1574)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1509)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:410)
> >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:200)
> >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42590)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
> >         at java.security.AccessController.doPrivileged(AccessController.java:284)
> >         at javax.security.auth.Subject.doAs(Subject.java:573)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
> >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> >
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:56)
> >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:39)
> >         at java.lang.reflect.Constructor.newInstance(Constructor.java:527)
> >         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >         at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1250)
> >         at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1266)
> >         at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1090)
> >         at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1048)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:232)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:75)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:804)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:785)
> >         at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:684)
> >         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:259)
> >         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:232)
> >         at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1817)
> >         at Client.prepareJarResource(Client.java:299)
> >         at Client.launchAndMonitorAM(Client.java:509)
> >         at Client.main(Client.java:146)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> >         at java.lang.reflect.Method.invoke(Method.java:611)
> >         at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> > Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":kbonagir:supergroup:drwxr-xr-x
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
> >         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4203)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4174)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1574)
> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1509)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:410)
> >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:200)
> >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42590)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
> >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
> >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
> >         at java.security.AccessController.doPrivileged(AccessController.java:284)
> >
> >
> > Thanks,
> > Kishore
> >
> >
>
>
>
> --
> Harsh J
>
