From: Hai Huang
Date: Tue, 7 Feb 2012 20:13:30 -0800 (PST)
To: common-dev@hadoop.apache.org
Subject: Re: Issue on running examples

Figured out the issue. It was caused by incorrectly passing the parameters

    -Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar

to "hadoop-daemon.sh start namenode" (step 1 of my earlier message, quoted below). If I drop them from that command, the example runs, but some exceptions are still thrown in the output, for example:

    2012-02-07 20:05:25,202 WARN  mapreduce.Job (Job.java:getTaskLogs(1460)) - Error reading task output Server returned HTTP response code: 400 for URL: http://localhost:8080/tasklog?plaintext=true&attemptid=attempt_1328672895560_0002_m_000003_1&filter=stdout

The URL above displays a page saying "Required param job, map and reduce".

I am going to check more of them.
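For reference, those generic options presumably belong on the job submission itself rather than on the daemon start. A minimal sketch of the corrected invocation, assuming the examples jar accepts generic options through the standard ToolRunner/GenericOptionsParser handling (option values are copied from the command above; I omit the deprecated dfs.block.size spelling, which duplicates dfs.blocksize; adjust paths as needed):

    # Sketch: pass the -D properties and -libjars to the job, not to the namenode daemon.
    ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar randomwriter \
        -Dmapreduce.job.user.name=$USER \
        -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory \
        -Dmapreduce.randomwriter.bytespermap=10000 \
        -Ddfs.blocksize=536870912 \
        -libjars ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar \
        output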
Hai

----- Original Message -----
From: Hai Huang
To: common-dev@hadoop.apache.org
Sent: Sunday, February 5, 2012 9:52:21 PM
Subject: Re: Issue on running examples

I am doing the following steps to run an example -- randomwriter (a quick sanity check for the daemons is sketched after the list):

1. sbin/hadoop-daemon.sh start namenode -Dmapreduce.job.user.name=$USER -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -Dmapreduce.randomwriter.bytespermap=10000 -Ddfs.blocksize=536870912 -Ddfs.block.size=536870912 -libjars ./share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar
2. sbin/hadoop-daemon.sh start datanode
3. bin/yarn-daemon.sh start resourcemanager
4. bin/yarn-daemon.sh start nodemanager
5. ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-0.24.0-SNAPSHOT.jar randomwriter output
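As a quick sanity check after steps 1-4, all four daemons should show up as running JVMs before the job is submitted. A minimal sketch using the JDK's jps tool (the main-class names below are what the daemons normally report; PIDs are illustrative):

    $ jps
    12345 NameNode
    12346 DataNode
    12347 ResourceManager
    12348 NodeManager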
Step 5 reported the error message below:

====================================================================

2012-02-05 18:44:21,905 WARN  conf.Configuration (Configuration.java:set(639)) - mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
Running 10 maps.
Job started: Sun Feb 05 18:44:22 PST 2012
2012-02-05 18:44:22,512 WARN  conf.Configuration (Configuration.java:handleDeprecation(326)) - fs.default.name is deprecated. Instead, use fs.defaultFS
2012-02-05 18:44:22,618 WARN  hdfs.DFSClient (DFSOutputStream.java:run(549)) - DataStreamer Exception
java.io.IOException: java.io.IOException: File /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1145)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1540)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:477)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:346)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:439)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:862)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1608)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1604)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:416)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1602)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:203)
        at $Proxy10.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:127)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:81)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:355)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1097)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:973)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:455)
2012-02-05 18:44:22,620 INFO  mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(388)) - Cleaning up the staging area /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007
2012-02-05 18:44:22,626 ERROR security.UserGroupInformation (UserGroupInformation.java:doAs(1180)) - PriviledgedActionException as:hai (auth:SIMPLE) cause:java.io.IOException: java.io.IOException: File /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
        [server-side frames identical to the first 13 lines of the trace above]
java.io.IOException: java.io.IOException: File /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
        [stack trace identical to the DataStreamer trace above]
2012-02-05 18:44:22,629 ERROR hdfs.DFSClient (DFSClient.java:closeAllFilesBeingWritten(435)) - Failed to close file /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar
java.io.IOException: java.io.IOException: File /tmp/hadoop-yarn/staging/hai/.staging/job_1328468416955_0007/libjars/hadoop-mapreduce-client-jobclient-0.24.0-SNAPSHOT.jar could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
        [stack trace identical to the DataStreamer trace above]
====================================================================

Any ideas about this issue?
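For context, "could only be replicated to 0 nodes instead of minReplication (=1)" generally means the namenode could not find a usable datanode to place the block on (for example, the datanode is out of disk space or has not finished registering). A minimal sketch of one way to check, assuming the stock dfsadmin tool shipped with this build:

    # List the datanodes the namenode actually knows about,
    # including configured and remaining capacity per node.
    ./bin/hdfs dfsadmin -report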
Also, I tried to use the following command:

sbin/stop-dfs.sh

and got these issues:

====================================================================
Stopping namenodes on [localhost]
localhost: stopping namenode
cat: /home/hai/hadoop-common/hadoop-dist/target/hadoop-0.24.0-SNAPSHOT/conf//slaves: No such file or directory
Secondary namenodes are not configured.  Cannot stop secondary namenodes.
====================================================================
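The "cat: .../conf//slaves: No such file or directory" line suggests stop-dfs.sh reads a slaves file (the list of worker hosts) from the conf directory, and that file does not exist here. A sketch of a possible fix for a single-node setup, assuming the conf path in the error message is the one the scripts actually use:

    # Create the missing slaves file; on a single-node cluster the only worker is localhost.
    echo localhost > /home/hai/hadoop-common/hadoop-dist/target/hadoop-0.24.0-SNAPSHOT/conf/slaves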
Anyway, the doc at http://hadoop.apache.org/common/docs/stable/single_node_setup.html looks out of date, since the command "$ sbin/start-all.sh" reported "This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh".

Thanks,

Hai