hadoop-mapreduce-user mailing list archives

From Reyane Oukpedjo <oukped...@gmail.com>
Subject Re: Hadoop Jobtracker heap size calculation and OOME
Date Fri, 11 Oct 2013 23:33:46 GMT
Hi there,
I had a similar issue with hadoop-1.2.0: the JobTracker kept crashing until I
set HADOOP_HEAPSIZE="2048". I did not have this kind of issue with previous
versions. You can try this if you have the memory available and see. In my
case the issue was gone after I set it as above.
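
The setting above goes in conf/hadoop-env.sh. A minimal sketch (the 2048 MB
value is just what worked on my cluster, not a universal recommendation; the
HADOOP_JOBTRACKER_OPTS line is an optional alternative I believe Hadoop 1.x
supports for targeting only the JobTracker):

```shell
# conf/hadoop-env.sh -- sketch, adjust to your cluster's memory.
# HADOOP_HEAPSIZE sets the maximum JVM heap (-Xmx, in MB) for Hadoop
# daemons launched by the start scripts, including the JobTracker.
export HADOOP_HEAPSIZE="2048"

# Alternatively, in Hadoop 1.x you can raise only the JobTracker's heap
# and leave the other daemons at the default:
export HADOOP_JOBTRACKER_OPTS="-Xmx2048m $HADOOP_JOBTRACKER_OPTS"
```

Restart the JobTracker after changing either setting for it to take effect.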

Thanks


Reyane OUKPEDJO


On 11 October 2013 13:08, Viswanathan J <jayamviswanathan@gmail.com> wrote:

> Hi,
>
> I'm running a 14-node Hadoop cluster with datanodes and tasktrackers
> running on all nodes.
>
> *Apache Hadoop :* 1.2.1
>
> It currently shows the heap size as follows:
>
> Cluster Summary (Heap Size is 5.7/8.89 GB)
>
> In the above summary, what does the 8.89 GB represent? Is 8.89 the
> maximum heap size for the Jobtracker, and if so, how is it calculated?
>
> I assume 5.7 is the heap size used by currently running jobs; how is
> that calculated?
>
> I have set the Jobtracker default heap size in hadoop-env.sh:
>
> HADOOP_HEAPSIZE="1024"
>
> I have also set the mapred.child.java.opts value in mapred-site.xml:
>
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx2048m</value>
> </property>
>
> Even after setting the above property, I am getting Jobtracker OOME
> issues. Why does the Jobtracker memory gradually increase? After a
> restart of the JT, I get an OOME again within a week.
>
> How can I resolve this? It is in production and critical. Please help.
> Thanks in advance.
>
> --
> Regards,
> Viswa.J
>
