hadoop-mapreduce-user mailing list archives

From José Luis Larroque <larroques...@gmail.com>
Subject Can't run hadoop examples with YARN Single node cluster
Date Sat, 16 Jan 2016 18:07:19 GMT
Hi there, I'm currently running a single-node YARN cluster with Hadoop 2.4.0,
and for some reason I can't execute even the examples that come with
MapReduce (grep, wordcount, etc.). This is the line I use to run grep:

    $HADOOP_HOME/bin/yarn jar \
        /usr/local/hadoop/share/hadoop/yarn/lib/hadoop-mapreduce-examples-2.4.0.jar \
        grep input output2 'dfs[a-z.]+'
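
The input directory was populated beforehand along the lines of the single-node setup guide, roughly like this (a sketch from memory, so the exact files may differ):

    # put some text files into HDFS for the grep example to read (illustrative, not the exact files)
    $HADOOP_HOME/bin/hdfs dfs -mkdir -p input
    $HADOOP_HOME/bin/hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml input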

This cluster was previously running Giraph programs, but right now I need a
MapReduce application, so I switched it back to pure YARN.
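
In practice that switch just involved reverting the Giraph-specific settings and restarting the YARN daemons, roughly along these lines (a sketch, assuming the stock sbin scripts):

    # restart YARN so the reverted configuration is picked up
    $HADOOP_HOME/sbin/stop-yarn.sh
    $HADOOP_HOME/sbin/start-yarn.sh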

All failed containers had the same error:

    Container: container_1452447718890_0001_01_000002 on localhost_37976
    ======================================================================
    LogType: stderr
    LogLength: 45
    Log Contents:
    Error: Could not find or load main class 256

Main logs:

    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/yarn/lib/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/mapreduce/giraph-examples-1.1.0-for-hadoop-2.4.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    16/01/15 21:53:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    16/01/15 21:53:50 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
    16/01/15 21:53:51 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
    16/01/15 21:53:51 INFO input.FileInputFormat: Total input paths to process : 1
    16/01/15 21:53:52 INFO mapreduce.JobSubmitter: number of splits:1
    16/01/15 21:53:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0001
    16/01/15 21:53:53 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
    16/01/15 21:53:53 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0001
    16/01/15 21:53:54 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0001/
    16/01/15 21:53:54 INFO mapreduce.Job: Running job: job_1452905418747_0001
    16/01/15 21:54:04 INFO mapreduce.Job: Job job_1452905418747_0001 running in uber mode : false
    16/01/15 21:54:04 INFO mapreduce.Job:  map 0% reduce 0%
    16/01/15 21:54:07 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_0, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:11 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_1, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:15 INFO mapreduce.Job: Task Id : attempt_1452905418747_0001_m_000000_2, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:21 INFO mapreduce.Job:  map 100% reduce 100%
    16/01/15 21:54:21 INFO mapreduce.Job: Job job_1452905418747_0001 failed with state FAILED due to: Task failed task_1452905418747_0001_m_000000
    Job failed as tasks failed. failedMaps:1 failedReduces:0

    16/01/15 21:54:21 INFO mapreduce.Job: Counters: 12
        Job Counters
            Failed map tasks=4
            Launched map tasks=4
            Other local map tasks=3
            Data-local map tasks=1
            Total time spent by all maps in occupied slots (ms)=15548
            Total time spent by all reduces in occupied slots (ms)=0
            Total time spent by all map tasks (ms)=7774
            Total vcore-seconds taken by all map tasks=7774
            Total megabyte-seconds taken by all map tasks=3980288
        Map-Reduce Framework
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0
    16/01/15 21:54:21 INFO client.RMProxy: Connecting to ResourceManager at hdnode01/192.168.0.10:8050
    16/01/15 21:54:22 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
    16/01/15 21:54:22 INFO input.FileInputFormat: Total input paths to process : 0
    16/01/15 21:54:22 INFO mapreduce.JobSubmitter: number of splits:0
    16/01/15 21:54:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1452905418747_0002
    16/01/15 21:54:22 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
    16/01/15 21:54:22 INFO impl.YarnClientImpl: Submitted application application_1452905418747_0002
    16/01/15 21:54:22 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1452905418747_0002/
    16/01/15 21:54:22 INFO mapreduce.Job: Running job: job_1452905418747_0002
    16/01/15 21:54:32 INFO mapreduce.Job: Job job_1452905418747_0002 running in uber mode : false
    16/01/15 21:54:32 INFO mapreduce.Job:  map 0% reduce 0%
    16/01/15 21:54:36 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_0, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:41 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_1, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:46 INFO mapreduce.Job: Task Id : attempt_1452905418747_0002_r_000000_2, Status : FAILED
    Exception from container-launch: org.apache.hadoop.util.Shell$ExitCodeException:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:505)
        at org.apache.hadoop.util.Shell.run(Shell.java:418)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 1

    16/01/15 21:54:51 INFO mapreduce.Job:  map 0% reduce 100%
    16/01/15 21:54:52 INFO mapreduce.Job: Job job_1452905418747_0002 failed with state FAILED due to: Task failed task_1452905418747_0002_r_000000
    Job failed as tasks failed. failedMaps:0 failedReduces:1

    16/01/15 21:54:52 INFO mapreduce.Job: Counters: 10
        Job Counters
            Failed reduce tasks=4
            Launched reduce tasks=4
            Total time spent by all maps in occupied slots (ms)=0
            Total time spent by all reduces in occupied slots (ms)=11882
            Total time spent by all reduce tasks (ms)=5941
            Total vcore-seconds taken by all reduce tasks=5941
            Total megabyte-seconds taken by all reduce tasks=3041792
        Map-Reduce Framework
            CPU time spent (ms)=0
            Physical memory (bytes) snapshot=0
            Virtual memory (bytes) snapshot=0

I switched mapreduce.framework.name from:

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>

To:

    <property>
      <name>mapreduce.framework.name</name>
      <value>local</value>
    </property>

and grep and the other MapReduce jobs are working again.
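
For what it's worth, this is roughly how I checked which mapred-site.xml the client actually picks up (a sketch, assuming the default $HADOOP_HOME/etc/hadoop conf dir):

    # show which conf dir is on the client classpath, then the framework setting in mapred-site.xml
    $HADOOP_HOME/bin/hadoop classpath | tr ':' '\n' | grep 'etc/hadoop'
    grep -A1 'mapreduce.framework.name' $HADOOP_HOME/etc/hadoop/mapred-site.xml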

I don't understand why it doesn't work with the "yarn" value in
mapreduce.framework.name, but does work with "local".

Any idea how to fix this without switching the value of
mapreduce.framework.name?


Bye!
Jose
