From: Rahul Singh <smart.rahul.iiit@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 10 Apr 2014 17:14:04 +0530
Subject: Re: not able to run map reduce job example on aws machine

Here is my mapred-site.xml config:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at. If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>

Also, the job runs fine in-process if I remove the dependency on YARN, i.e. if I comment out:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

in mapred-site.xml.
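For reference, the two properties discussed above would sit together in one mapred-site.xml like the sketch below (values taken from this thread; the comments are an editor's gloss, not part of the original config):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- On Hadoop 2.x, mapreduce.framework.name selects the runtime:
       "yarn" submits jobs to the ResourceManager, while "local"
       runs them in-process as a single map and reduce task. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- mapred.job.tracker is the pre-YARN (MRv1) JobTracker address;
       it is not used when the framework is set to "yarn". -->
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
```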
On Thu, Apr 10, 2014 at 4:43 PM, Kiran Dangeti wrote:
> Rahul,
>
> Please check the port name given in mapred-site.xml
> Thanks
> Kiran
>
> On Thu, Apr 10, 2014 at 3:23 PM, Rahul Singh wrote:
>
>> Hi,
>>   I am getting the following exception while running the word count example:
>>
>> 14/04/10 15:17:09 INFO mapreduce.Job: Task Id : attempt_1397123038665_0001_m_000000_2, Status : FAILED
>> Container launch failed for container_1397123038665_0001_01_000004 : java.lang.IllegalArgumentException: Does not contain a valid host:port authority: poc_hadoop04:46162
>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:211)
>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:163)
>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:152)
>>     at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:210)
>>     at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:196)
>>     at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:117)
>>     at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:403)
>>     at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
>>     at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:369)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>     at java.lang.Thread.run(Thread.java:662)
>>
>> I have everything configured, with HDFS running: I am able to create files and directories. Running jps on my machine shows all components running:
>>
>> 10290 NameNode
>> 10416 DataNode
>> 10738 ResourceManager
>> 11634 Jps
>> 10584 SecondaryNameNode
>> 10844 NodeManager
>>
>> Any pointers will be appreciated.
>>
>> Thanks and Regards,
>> -Rahul Singh
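An editor's note on the stack trace quoted above: NetUtils.createSocketAddr parses the configured address via java.net.URI, and URI.getHost() returns null when the name is not a syntactically valid hostname. Underscores are not allowed in hostnames, so "poc_hadoop04" yields a null host, which is what raises "Does not contain a valid host:port authority". The class below is a hypothetical illustration of that parsing behavior, not Hadoop source:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class HostCheck {
    /** Returns the host part of a host:port string, or null if the hostname is invalid. */
    public static String hostOf(String hostPort) {
        try {
            // Wrap in a throwaway scheme so URI treats hostPort as the authority,
            // mirroring how a host:port address gets validated.
            return new URI("dummy://" + hostPort).getHost();
        } catch (URISyntaxException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(hostOf("poc_hadoop04:46162")); // null: underscore is illegal in a hostname
        System.out.println(hostOf("poc-hadoop04:46162")); // poc-hadoop04: a dash parses fine
    }
}
```

Renaming the node to something without an underscore (e.g. poc-hadoop04) would make the same address parse cleanly.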