hadoop-mapreduce-user mailing list archives

From Mohamed Riadh Trad <Mohamed.t...@inria.fr>
Subject Re: Mapreduce heap size error
Date Tue, 15 Nov 2011 01:23:17 GMT
Try passing -D mapred.child.java.opts=-Xmx4096M on the command line:

bin/hadoop jar yourjar.jar yourclass -D mapred.child.java.opts=-Xmx4096M ...
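
Note that -D is a generic option: it only takes effect if the job's driver runs through Hadoop's GenericOptionsParser, usually via ToolRunner. A minimal sketch for the Hadoop 0.20.x/1.x API (YourClass is a placeholder for your own driver):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// ToolRunner strips generic options such as
// -D mapred.child.java.opts=-Xmx4096M from the command line and
// folds them into the Configuration before run() sees the args.
public class YourClass extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    JobConf job = new JobConf(getConf(), YourClass.class);
    // ... set mapper/reducer classes and input/output paths here ...
    JobClient.runJob(job);
    return 0;
  }
  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new YourClass(), args));
  }
}

If the driver builds its JobConf directly in main() without ToolRunner, the -D option is silently ignored and the tasks keep their old heap.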

How many files do you have in your input folder?

Best regards,

Trad Mohamed Riadh, M.Sc, Ing.
PhD. student
INRIA-TELECOM PARISTECH - ENPC School of International Management

Office: 11-15
Phone: (33)-1 39 63 59 33
Fax: (33)-1 39 63 56 74
Email: riadh.trad@inria.fr
Home page: http://www-rocq.inria.fr/who/Mohamed.Trad/




On 14 Nov 2011, at 22:50, Hoot Thompson wrote:

> Any suggestions as to how to track down the root cause of these errors?
> 
> 1178709 [main] INFO org.apache.hadoop.mapred.JobClient  -  map 6% reduce 0%
> 11/11/15 00:45:29 INFO mapred.JobClient: Task Id : attempt_201111150008_0002_r_000000_0, Status : FAILED
> 1208771 [main] INFO org.apache.hadoop.mapred.JobClient  - Task Id : attempt_201111150008_0002_r_000000_0, Status : FAILED
> Error: java.lang.OutOfMemoryError: Java heap space
>     at org.apache.hadoop.mapred.IFile$Reader.readNextBlock(IFile.java:342)
>     at org.apache.hadoop.mapred.IFile$Reader.next(IFile.java:404)
>     at org.apache.hadoop.mapred.Merger$Segment.next(Merger.java:220)
>     at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:420)
>     at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:381)
>     at org.apache.hadoop.mapred.Merger.merge(Merger.java:60)
>     at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2651)
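
The trace above dies inside the reduce-side merge (Merger / LocalFSMerger), i.e. while the reducer merges spilled map outputs. Besides raising the child heap, the shuffle can be told to keep less in memory so it spills to disk earlier. A rough sketch of driver-side tuning; the property names assume Hadoop 0.20.x/1.x, and the values are illustrative, not recommendations:

// Sketch only: shrink the in-memory shuffle and the merge fan-in.
JobConf job = new JobConf(getConf(), YourClass.class);   // YourClass is hypothetical
job.set("mapred.child.java.opts", "-Xmx2048m");          // per-task JVM heap
// Heap fraction the reducer may use to buffer map outputs during shuffle
// (default 0.70); lowering it trades memory pressure for disk I/O.
job.setFloat("mapred.job.shuffle.input.buffer.percent", 0.50f);
job.setInt("io.sort.factor", 10);                        // segments merged per pass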
> 
> 
> On 11/13/11 6:34 PM, "Eric Fiala" <eric@fiala.ca> wrote:
> 
>> Hoot, these are big numbers - some thoughts:
>> 1) does your machine have ~1 TB to spare for each java child thread (each mapper + each reducer)? mapred.child.java.opts / -Xmx1048576m asks for 1,048,576 MB per child JVM
>> 2) does each of your daemons need / have 10 GB? HADOOP_HEAPSIZE=10000
>> 
>> hth
>> EF
>>>>>> # The maximum amount of heap to use, in MB. Default is 1000.
>>>>>> export HADOOP_HEAPSIZE=10000
>>>>>> <property>
>>>>>>   <name>mapred.child.java.opts</name>
>>>>>>   <value>-Xmx1048576m</value>
>>>>>> </property>
>>>>>> 
>> 

