hadoop-mapreduce-user mailing list archives

From Gaurav Gupta <gaurav.gopi...@gmail.com>
Subject Re: Can't run hadoop examples with YARN Single node cluster
Date Wed, 20 Jan 2016 00:30:37 GMT
Hi,

I think your YARN is not up and running, which would explain why you are not
able to run the jobs. Can you please verify that it is?
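
A quick way to check (just a sketch; this assumes the standard Hadoop 2.x
layout and that jps is on your PATH):

    # both the ResourceManager and NodeManager daemons should be listed
    jps

    # or ask YARN directly which NodeManagers have registered
    $HADOOP_HOME/bin/yarn node -list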

Thanks


On Sat, Jan 16, 2016 at 3:18 PM, Namikaze Minato <lloydsensei@gmail.com>
wrote:

> Hi again José Luis.
>
> Sorry, I was specifically talking about the
> "org.apache.hadoop.util.Shell$ExitCodeException" error.
> Can you provide the logs for a wordcount please?
> Also, do you have yarn running?
> I have never tweaked the mapreduce.framework.name value, so I might
> not be able to help you further, but these pieces of information might
> help the people who can.
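>
> In case it helps, a minimal wordcount run would look something like this
> (just a sketch; I'm assuming the examples jar is in its usual location and
> that "input" already exists in HDFS, so adjust paths as needed):
>
>     $HADOOP_HOME/bin/yarn jar \
>         /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar \
>         wordcount input wc-output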
>
> Regards,
> LLoyd
>
> On 17 January 2016 at 00:07, José Luis Larroque <larroquester@gmail.com>
> wrote:
> > Thanks for your answer Lloyd!
> >
> > I'm not sure about that. Wordcount from the same jar gives me the same
> > error, and so does my own map reduce job.
> >
> > I believe the "Error: Could not find or load main class 256" error is
> > happening because it's not finding the mapper, but I'm not sure.
> >
> > Bye!
> > Jose
> >
> >
> > 2016-01-16 19:41 GMT-03:00 Namikaze Minato <lloydsensei@gmail.com>:
> >>
> >> Hello José Luis Larroque.
> >>
> >> Your problem here is only that grep is returning a non-zero exit code
> >> when no occurrences are found.
> >> I know that for spark-streaming, using the option "-jobconf
> >> stream.non.zero.exit.is.failure=false" solves the problem, but I don't
> >> know how hadoop-mapreduce-examples-2.4.0.jar handles this.
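> >>
> >> For a Hadoop Streaming job, that option would be passed roughly like this
> >> (only a sketch: the streaming jar path is an assumption, -D is the
> >> non-deprecated equivalent of -jobconf, and as I said I don't know whether
> >> the examples jar respects this property at all). The property matters here
> >> because grep exits with code 1 whenever it finds no matches:
> >>
> >>     $HADOOP_HOME/bin/yarn jar \
> >>         $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar \
> >>         -D stream.non.zero.exit.is.failure=false \
> >>         -input input -output output2 \
> >>         -mapper 'grep dfs' -reducer cat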
> >>
> >> Regards,
> >> LLoyd
> >>
> >> On 16 January 2016 at 19:07, José Luis Larroque <larroquester@gmail.com>
> >> wrote:
> >> > Hi there, I'm currently running a single-node YARN cluster, Hadoop 2.4.0,
> >> > and for some reason I can't even execute an example that comes with Map
> >> > Reduce (grep, wordcount, etc.). This is the line I use to execute grep:
> >> >
> >> >     $HADOOP_HOME/bin/yarn jar \
> >> >         /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar \
> >> >         grep input output2 'dfs[a-z.]+'
> >> >
> >> > This cluster was previously running Giraph programs, but right now I need
> >> > a Map Reduce application, so I switched it back to pure YARN.
> >> >
> >> > All failed containers had the same error:
> >> >
> >> >     Container: container_1452447718890_0001_01_000002 on localhost_37976
> >> >     ======================================================================
> >> >     LogType: stderr
> >> >     LogLength: 45
> >> >     Log Contents:
> >> >     Error: Could not find or load main class 256
> >> >
> >> > Main logs:
> >> >
> >> >     SLF4J: Class path contains multiple SLF4J bindings.
> >> >     SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> >     SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> >> >     SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> >> >     16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> >> >     16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to process : 1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
> >> >     16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0001
> >> >     16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
> >> >     16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0001
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0001/
> >> >     16/01/15 21:53:54 INFO mapreduce.Job: Running job: job_1452905418747_0001
> >> >     16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running in uber mode : false
> >> >     16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:07 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_0, Status : FAILED
> >> >     Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:11 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_1, Status : FAILED
> >> >     Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:15 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_2, Status : FAILED
> >> >     Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed with state FAILED due to: Task failed task_1452905418747_0001_m_000000
> >> >     Job failed as tasks failed. failedMaps:1 failedReduces:0
> >> >
> >> >     16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
> >> >         Job Counters
> >> >             Failed map tasks=4
> >> >             Launched map tasks=4
> >> >             Other local map tasks=3
> >> >             Data-local map tasks=1
> >> >             Total time spent by all maps in occupied slots (ms)=15548
> >> >             Total time spent by all reduces in occupied slots (ms)=0
> >> >             Total time spent by all map tasks (ms)=7774
> >> >             Total vcore-seconds taken by all map tasks=7774
> >> >             Total megabyte-seconds taken by all map tasks=3980288
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >     16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
> >> >     16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
> >> >     16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to process : 0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
> >> >     16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
> >> >     16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0002
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0002/
> >> >     16/01/15 21:54:22 INFO mapreduce.Job: Running job: job_1452905418747_0002
> >> >     16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running in uber mode : false
> >> >     16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
> >> >     16/01/15 21:54:36 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_0, Status : FAILED
> >> >     Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:41 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_1, Status : FAILED
> >> >     Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:46 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_2, Status : FAILED
> >> >     Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
> >> >     org.apache.hadoop.util.Shell$ExitCodeException:
> >> >         at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
> >> >         at org.apache.hadoop.util.Shell.run(Shell.java:418)
> >> >         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> >> >         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> >> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >> >         at java.lang.Thread.run(Thread.java:745)
> >> >
> >> >     Container exited with a non-zero exit code 1
> >> >
> >> >     16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed with state FAILED due to: Task failed task_1452905418747_0002_r_000000
> >> >     Job failed as tasks failed. failedMaps:0 failedReduces:1
> >> >
> >> >     16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
> >> >         Job Counters
> >> >             Failed reduce tasks=4
> >> >             Launched reduce tasks=4
> >> >             Total time spent by all maps in occupied slots (ms)=0
> >> >             Total time spent by all reduces in occupied slots (ms)=11882
> >> >             Total time spent by all reduce tasks (ms)=5941
> >> >             Total vcore-seconds taken by all reduce tasks=5941
> >> >             Total megabyte-seconds taken by all reduce tasks=3041792
> >> >         Map-Reduce Framework
> >> >             CPU time spent (ms)=0
> >> >             Physical memory (bytes) snapshot=0
> >> >             Virtual memory (bytes) snapshot=0
> >> >
> >> > I switched mapreduce.framework.name from:
> >> >
> >> > <property>
> >> >   <name>mapreduce.framework.name</name>
> >> >   <value>yarn</value>
> >> > </property>
> >> >
> >> > To:
> >> >
> >> > <property>
> >> >   <name>mapreduce.framework.name</name>
> >> >   <value>local</value>
> >> > </property>
> >> >
> >> > and grep and other mapreduce jobs are working again.
> >> >
> >> > I don't understand why it doesn't work when mapreduce.framework.name is
> >> > set to "yarn", but does work when it is set to "local".
> >> >
> >> > Any idea how to fix this without switching the value of
> >> > mapreduce.framework.name?
> >> >
> >> >
> >> >
> >> > Bye!
> >> > Jose
> >> >
> >> >
> >> >
> >
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: user-help@hadoop.apache.org
>
>
