hadoop-mapreduce-user mailing list archives

From francexo83 <francex...@gmail.com>
Subject Re: MR job fails with too many mappers
Date Tue, 18 Nov 2014 16:46:48 GMT
Hi Tsuyoshi,

these are the configurations you requested:

yarn.app.mapreduce.am.resource.mb=256

mapreduce.map.memory.mb=Not set
mapreduce.reduce.memory.mb=Not set
mapreduce.map.java.opts=Not set
mapreduce.reduce.java.opts=Not set
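
When these properties are "Not set", the Hadoop defaults apply. For reference, they can be set explicitly in mapred-site.xml; the values below are only illustrative examples, not tuned recommendations:

```xml
<!-- mapred-site.xml: illustrative values only -->
<configuration>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <!-- JVM heap should stay below the container size above -->
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx768m</value>
  </property>
</configuration>
```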


thanks

2014-11-18 17:01 GMT+01:00 Tsuyoshi OZAWA <ozawa.tsuyoshi@gmail.com>:

> Hi,
>
> Could you share the following configurations? The failures could be
> caused by out-of-memory errors on the mapper side.
>
> yarn.app.mapreduce.am.resource.mb
> mapreduce.map.memory.mb
> mapreduce.reduce.memory.mb
> mapreduce.map.java.opts
> mapreduce.reduce.java.opts
>
> On Wed, Nov 19, 2014 at 12:23 AM, francexo83 <francexo83@gmail.com> wrote:
> > Hi All,
> >
> > I have a small Hadoop cluster with three nodes and HBase 0.98.1
> > installed on it.
> >
> > The Hadoop version is 2.3.0; my use case is below.
> >
> > I wrote a MapReduce program that reads data from an HBase table and
> > does some transformations on the data. The jobs are very simple, so
> > they do not need a reduce phase. I also wrote a TableInputFormat
> > extension in order to maximize the number of concurrent maps on the
> > cluster.
> > In other words, each row should be processed by a single map task.
> >
> > Everything goes well until the number of rows, and consequently of
> > mappers, exceeds 300,000.
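
The "one map task per row" idea can be sketched as follows. In real code this would subclass org.apache.hadoop.hbase.mapreduce.TableInputFormat and override getSplits(); here RowRange is a hypothetical stand-in for an HBase TableSplit, so the snippet only illustrates the splitting logic, not the actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class OneSplitPerRow {
    // Hypothetical stand-in for an HBase TableSplit: a scan range
    // [startRow, stopRow) handed to one map task.
    static class RowRange {
        final String startRow;
        final String stopRow;
        RowRange(String startRow, String stopRow) {
            this.startRow = startRow;
            this.stopRow = stopRow;
        }
    }

    // One split per row key: the range [key, key + "\0") covers exactly
    // that row, so each map task processes a single row.
    static List<RowRange> splitsForRows(List<String> rowKeys) {
        List<RowRange> splits = new ArrayList<>();
        for (String key : rowKeys) {
            splits.add(new RowRange(key, key + "\0"));
        }
        return splits;
    }
}
```

Note that this produces as many splits (and thus map tasks) as there are rows, which is exactly what drives the task count toward 300,000 in this thread.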
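
One plausible suspect at this scale is the MR ApplicationMaster itself: it keeps per-task bookkeeping in its own heap, and yarn.app.mapreduce.am.resource.mb is only 256 in this thread. A back-of-the-envelope estimate, using an assumed (not measured) per-task cost, shows how 300,000 tasks could exceed that budget:

```java
public class AmMemoryEstimate {
    // Hypothetical per-task bookkeeping cost in the AM heap, in bytes.
    // The real figure depends on the Hadoop version and job counters;
    // treat this as an order-of-magnitude assumption, not a measurement.
    static final long BYTES_PER_TASK = 2_000;
    static final long BASE_AM_MB = 100;  // assumed baseline AM footprint

    static long estimatedAmHeapMb(long numTasks) {
        return BASE_AM_MB + (numTasks * BYTES_PER_TASK) / (1024 * 1024);
    }

    public static void main(String[] args) {
        // With 300,000 tasks the estimate lands well above 256 MB.
        System.out.println(estimatedAmHeapMb(300_000));
    }
}
```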
> >
> > This is the only exception I see when the job fails:
> >
> > Application application_1416304409718_0032 failed 2 times due to AM
> > Container for appattempt_1416304409718_0032_000002 exited with
> > exitCode: 1 due to:
> >
> > Exception from container-launch:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> > org.apache.hadoop.util.Shell$ExitCodeException:
> > at org.apache.hadoop.util.Shell.runCommand(Shell.java:511)
> > at org.apache.hadoop.util.Shell.run(Shell.java:424)
> > at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:656)
> > at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
> > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
> > at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> > Container exited with a non-zero exit code 1
> >
> >
> > Cluster configuration details:
> > Node1: 12 GB, 4 core
> > Node2: 6 GB, 4 core
> > Node3: 6 GB, 4 core
> >
> > yarn.scheduler.minimum-allocation-mb=2048
> > yarn.scheduler.maximum-allocation-mb=4096
> > yarn.nodemanager.resource.memory-mb=6144
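
As a quick sanity check on how many containers this cluster can run at once (assuming the yarn.nodemanager.resource.memory-mb=6144 setting applies on all three nodes, which the thread does not state explicitly): with a 2048 MB minimum allocation, each node hosts at most three containers, nine cluster-wide:

```java
public class ContainerCapacity {
    // Containers a NodeManager can host, given its memory budget and the
    // scheduler's minimum allocation (values from this thread).
    static int containersPerNode(int nodeMemMb, int minAllocMb) {
        return nodeMemMb / minAllocMb;
    }

    public static void main(String[] args) {
        int perNode = containersPerNode(6144, 2048);
        // Assumes the 6144 MB budget holds on all three nodes.
        System.out.println(perNode * 3);
    }
}
```

One of those nine slots is taken by the ApplicationMaster, so at most eight map tasks run concurrently regardless of how many splits the job has.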
> >
> >
> >
> > Regards
>
>
>
> --
> - Tsuyoshi
>
