hadoop-mapreduce-user mailing list archives

From: Hoot Thompson <h...@ptpnow.com>
Subject: Re: Mapreduce heap size error
Date: Mon, 14 Nov 2011 00:23:38 GMT
I cranked those settings up in an attempt to solve the heap issues. Just to
verify, I restored the defaults and cycled both the dfs and mapred daemons.
Still getting the same error.


On 11/13/11 6:34 PM, "Eric Fiala" <eric@fiala.ca> wrote:

> Hoot, these are big numbers - some thoughts:
> 1) does your machine have ~1 TB to spare for each child JVM (each
> mapper + each reducer)? mapred.child.java.opts / -Xmx1048576m asks for
> 1,048,576 MB, since the "m" suffix means megabytes.
> 2) does each of your daemons need / have 10 GB? HADOOP_HEAPSIZE=10000
> 
> hth
> EF
>>>>> # The maximum amount of heap to use, in MB. Default is 1000.
>>>>>  export HADOOP_HEAPSIZE=10000
>>>>> <property>
>>>>> <name>mapred.child.java.opts</name>
>>>>> <value>-Xmx1048576m</value>
>>>>> </property>
>>>>> 
> 
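
For reference, restoring both values to sane defaults looks roughly like the
following (a sketch only; HADOOP_HEAPSIZE and mapred.child.java.opts are the
settings quoted above, and the 1024 MB child heap is an assumed starting point
that should be sized to the node's RAM and configured task slots). In
hadoop-env.sh:

  # Daemon heap, in MB. The 1000 MB default is usually sufficient.
  export HADOOP_HEAPSIZE=1000

And in mapred-site.xml:

  <!-- Per-task child JVM heap. The unit comes from the JVM flag's
       suffix: -Xmx1024m means 1024 MB, whereas -Xmx1048576m asks
       for roughly a terabyte per task. -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>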

