hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Increasing Java Heap Space in Slave Nodes
Date Sat, 07 Sep 2013 09:54:14 GMT
You can pass that config as part of your job (jobConf.set(…) or
job.getConfiguration().set(…)). Alternatively, if your driver implements Tool
and uses the Configuration it is handed, you can also pass the property as a
-Dname=value argument when running the job (the -D option must precede
any custom options).
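A minimal sketch of both approaches, assuming a hypothetical driver class named MyJob (the class name, job name, and elided mapper/reducer setup are illustrative, not from this thread):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical driver showing where mapred.child.java.opts can be set
// without touching the cluster's conf files.
public class MyJob extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Approach 1: set the property programmatically. getConf() returns
        // the Configuration that ToolRunner already populated from any
        // -Dname=value generic options, so command-line overrides still work.
        Configuration conf = getConf();
        conf.set("mapred.child.java.opts", "-Xmx2000m");

        Job job = Job.getInstance(conf, "my job");
        job.setJarByClass(MyJob.class);
        // ... mapper/reducer and input/output setup elided ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner parses the generic options (-D, -conf, -files, ...)
        // before passing the remaining args to run().
        System.exit(ToolRunner.run(new Configuration(), new MyJob(), args));
    }
}
```

With the Tool plumbing in place, approach 2 needs no code change at all, e.g. `hadoop jar myjob.jar MyJob -Dmapred.child.java.opts=-Xmx2000m <input> <output>`; the -D option must come before the job's own arguments.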

On Sat, Sep 7, 2013 at 2:06 AM, Arko Provo Mukherjee
<arkoprovomukherjee@gmail.com> wrote:
> Hello All,
> I am running my job on a Hadoop Cluster and it fails due to insufficient
> Java Heap Memory.
> I searched on Google and found that I need to add the following to the
> conf files:
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx2000m</value>
>   </property>
> However, I don't want to ask the administrator to change the settings, as
> that is a long process.
> Is there a way I can ask Hadoop to use more Heap Space in the Slave nodes
> without changing the conf files via some command line parameter?
> Thanks & regards
> Arko

Harsh J
