From: "Liu, Raymond" <raymond.liu@intel.com>
To: user@hadoop.apache.org
Subject: Failed to run wordcount on YARN
Date: Fri, 12 Jul 2013 07:26:53 +0000

Hi,

I have just started trying out Hadoop 2.0. I am using the 2.0.5-alpha package and followed
http://hadoop.apache.org/docs/r2.0.5-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html
to set up a cluster in non-secure mode. HDFS works fine with the client tools.
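For example, basic client operations like these all succeed (an illustrative sketch, not a transcript; core-site.xml is just a sample file):

    # copy a small file into HDFS, list it, and read it back
    ./bin/hadoop fs -put etc/hadoop/core-site.xml /tmp
    ./bin/hadoop fs -ls /tmp
    ./bin/hadoop fs -cat /tmp/core-site.xml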
But when I run the wordcount example, tasks fail:

./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.5-alpha.jar wordcount /tmp /out

13/07/12 15:05:53 INFO mapreduce.Job: Task Id : attempt_1373609123233_0004_m_000004_0, Status : FAILED
Error: java.io.FileNotFoundException: Path is not a file: /tmp/hadoop-yarn
        at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:42)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1317)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1276)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1252)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1225)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:403)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:239)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:40728)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1741)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1737)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1735)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:986)
        at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:974)
        at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:157)
        at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:124)
        at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:117)
        at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1131)
        at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:244)
        at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:77)
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:713)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:89)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:519)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:158)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1478)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:153)
I checked HDFS, and /tmp/hadoop-yarn is there; the directory's owner is the same as the job user. To rule things out, I also created /tmp/hadoop-yarn on the local fs. Neither helps (the checks are sketched after my signature).

Any idea what the problem might be? Thx!

Best Regards,
Raymond Liu
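P.S. Roughly the commands behind the checks above, as a sketch (output trimmed; the local-fs step is a plain mkdir):

    # confirm the path exists in HDFS and note its owner
    ./bin/hadoop fs -ls /tmp
    ./bin/hadoop fs -ls /tmp/hadoop-yarn

    # also create the same path on the local filesystem
    mkdir -p /tmp/hadoop-yarn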