Date: Mon, 28 Jul 2014 10:11:42 -0700
From: Konstantin Boudnik <cos@apache.org>
To: user@bigtop.apache.org
Subject: Re: Hadoop Single-Node Problems

IIRC you were installing the cluster manually, as in no Puppet. As a result,
your HDFS isn't structured properly: /user/root was never created (and that's
what will be used when you do 'sudo hadoop something'). You can fix it easily
by locating the init-hdfs.sh script under the Hadoop installation directory
and running it.
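Something along these lines should do it (a minimal sketch; the exact paths
may differ depending on the Bigtop/Hadoop version and where the packages put
things):

    # locate the script shipped with the Hadoop packages
    find /usr/lib/hadoop* -name init-hdfs.sh

    # run it as root; it su's to the hdfs superuser internally
    sudo /usr/lib/hadoop/libexec/init-hdfs.sh

    # or, if all you are missing is root's home directory in HDFS,
    # create it by hand as the hdfs superuser:
    sudo -u hdfs hadoop fs -mkdir -p /user/root
    sudo -u hdfs hadoop fs -chown root /user/root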
Cos On Mon, Jul 28, 2014 at 09:16AM, David Fryer wrote: > Hi Bigtop, > > When I try to run an example job on a single-node, I get the following > exceptions: > [hadoopuser@master bin]$ sudo hadoop jar > /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1000 > Number of Maps = 10 > Samples per Map = 1000 > 14/07/28 07:45:33 WARN mapred.JobConf: The variable mapred.child.ulimit is > no longer used. > org.apache.hadoop.security.AccessControlException: Permission denied: > user=root, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:221) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:201) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:146) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4546) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4516) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2936) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2900) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2882) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:659) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:462) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40760) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) > > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:526) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57) > at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2144) > at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2113) > at > org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:540) > at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881) > at > org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:282) > at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72) > at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144) > at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.main(RunJar.java:212) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Permission denied: user=root, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:221) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:201) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:146) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4546) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4516) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2936) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2900) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2882) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:659) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:462) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40760) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) > > at org.apache.hadoop.ipc.Client.call(Client.java:1240) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) > at com.sun.proxy.$Proxy9.mkdirs(Unknown Source) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) > at com.sun.proxy.$Proxy9.mkdirs(Unknown Source) > at > 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:446) > at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2142) > ... 19 more > > Also, I get an exception when I try to initialize hdfs: > [hadoopuser@master bin]$ sudo /usr/lib/hadoop/libexec/init-hdfs.sh > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /tmp' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1777 /tmp' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /var' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /var/log' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1775 /var/log' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown yarn:mapred /var/log' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /tmp/hadoop-yarn' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown -R mapred:mapred > /tmp/hadoop-yarn' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 > /tmp/hadoop-yarn' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir -p > /var/log/hadoop-yarn/apps' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 1777 > /var/log/hadoop-yarn/apps' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown yarn:mapred > /var/log/hadoop-yarn/apps' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /hbase' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hbase:hbase /hbase' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /solr' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown solr:solr /solr' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /benchmarks' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /benchmarks' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod 755 /user' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hdfs /user' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/history' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown mapred:mapred > /user/history' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod 755 /user/history' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/jenkins' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/jenkins' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown jenkins /user/jenkins' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/hive' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/hive' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hive /user/hive' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/root' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/root' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown root /user/root' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/hue' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/hue' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown hue /user/hue' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/sqoop' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/sqoop' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown sqoop /user/sqoop' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/oozie' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chmod -R 777 /user/oozie' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -chown -R oozie /user/oozie' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/oozie/share' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir /user/oozie/share/lib' 
> + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir > /user/oozie/share/lib/hive' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir > /user/oozie/share/lib/mapreduce-streaming' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir > /user/oozie/share/lib/distcp' > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -mkdir > /user/oozie/share/lib/pig' > + ls '/usr/lib/hive/lib/*.jar' > + ls /usr/lib/hadoop-mapreduce/hadoop-streaming-2.0.6-alpha.jar > /usr/lib/hadoop-mapreduce/hadoop-streaming.jar > + su -s /bin/bash hdfs -c '/usr/bin/hadoop fs -put > /usr/lib/hadoop-mapreduce/hadoop-streaming*.jar > /user/oozie/share/lib/mapreduce-streaming' > 14/07/28 07:58:35 WARN hdfs.DFSClient: DataStreamer Exception > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.0.6-alpha.jar._COPYING_ > could only be replicated to 0 nodes instead of minReplication (=1). There > are 0 datanode(s) running and no node(s) are excluded in this operation. > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2155) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) > > at org.apache.hadoop.ipc.Client.call(Client.java:1240) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) > put: File > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.0.6-alpha.jar._COPYING_ > could only be replicated to 0 nodes instead of minReplication 
(=1). There > are 0 datanode(s) running and no node(s) are excluded in this operation. > 14/07/28 07:58:35 WARN hdfs.DFSClient: DataStreamer Exception > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar._COPYING_ > could only be replicated to 0 nodes instead of minReplication (=1). There > are 0 datanode(s) running and no node(s) are excluded in this operation. > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2155) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) > > at org.apache.hadoop.ipc.Client.call(Client.java:1240) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) > put: File > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar._COPYING_ > could only be replicated to 0 nodes instead of minReplication (=1). There > are 0 datanode(s) running and no node(s) are excluded in this operation. > 14/07/28 07:58:35 ERROR hdfs.DFSClient: Failed to close file > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.0.6-alpha.jar._COPYING_ > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming-2.0.6-alpha.jar._COPYING_ > could only be replicated to 0 nodes instead of minReplication (=1). There > are 0 datanode(s) running and no node(s) are excluded in this operation. 
> at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2155) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) > > at org.apache.hadoop.ipc.Client.call(Client.java:1240) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) > 14/07/28 07:58:35 ERROR hdfs.DFSClient: Failed to close file > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar._COPYING_ > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File > /user/oozie/share/lib/mapreduce-streaming/hadoop-streaming.jar._COPYING_ > could only be replicated to 0 nodes instead of minReplication (=1). There > are 0 datanode(s) running and no node(s) are excluded in this operation. 
> at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2155) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:491) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:351) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40744) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735) > > at org.apache.hadoop.ipc.Client.call(Client.java:1240) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83) > at com.sun.proxy.$Proxy9.addBlock(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:311) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1156) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1009) > at > org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464) > > Can anyone help resolve these issues? > > Thank You, > David Fryer