Subject: Yarn MapReduce Job Issue - AM Container launch error in Hadoop 2.3.0
From: Tony Mullins <tonymullins.tm@gmail.com>
To: user@hadoop.apache.org
Date: Sat, 22 Mar 2014 21:03:56 +0500

Hi,

I have set up a 2-node cluster of Hadoop 2.3.0. It's working fine and I can successfully run the distributedshell-2.2.0.jar example, but when I try to run any MapReduce job it fails. I have set up mapred-site.xml and the other configs for running MapReduce jobs according to http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide.
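(For reference, my mapred-site.xml follows the minimal setup from that guide. The snippet below is a rough sketch of that kind of config, not a verbatim copy of my file; the key property is mapreduce.framework.name:)

<configuration>
  <property>
    <!-- submit MapReduce jobs to YARN instead of the local job runner -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

With that in place, whenever I submit a MapReduce job I get the following error: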

14/03/22 20:31:17 INFO mapreduce.Job: Job job_1395502230567_0001 failed with state FAILED due to: Application application_1395502230567_0001 failed 2 times due to AM Container for appattempt_1395502230567_0001_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
    at org.apache.hadoop.util.Shell.run(Shell.java:418)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)

Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.

14/03/22 20:31:17 INFO mapreduce.Job: Counters: 0
Job ended: Sat Mar 22 20:31:17 PKT 2014
The job took 6 seconds.

And if I look at stderr (the job's log), there is only one line:

"Could not find or load main c= lass 614"

Now I have googled it, and usually this issue comes up when you have different Java versions or when the classpath in yarn-site.xml is not set properly. My yarn-site.xml has this:


<property>
<name>yarn.application.classpath&l= t;/name>
<value>/opt/yarn/hadoop-2.3.0/etc/hadoop,/opt/yarn= /hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib/*,/opt/yarn/hadoop-2.3.0/*,/opt/= yarn/hadoop-2.3.0/lib/*,/opt/yarn/hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib= /*,/opt/yarn/hadoop-2.3.0/*,/opt/yarn/hadoop-2.3.0/lib/*</value>
</property>

So, any other ideas on what the issue could be here?

I am running my MapReduce job like this:

$HADOOP_PREFIX/bin/hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar randomwriter out

Thanks, Tony

