hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Configuring Hadoop daemon heap sizes
Date Mon, 01 Aug 2011 18:16:17 GMT
For specific max-heap sizes, you have to pass the value as a Java VM
argument. See http://avricot.com/blog/index.php?post/2010/05/03/Get-started-with-java-JVM-memory-(heap%2C-stack%2C-xss-xms-xmx-xmn...)
for a good overview of the JVM memory options.

An example of passing specific heap-size options to the JVM, set in hadoop-env.sh:

# 1024 MB for DN
HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -Xmx1024m"
# 4 GB for NN
HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -Xmx4g"
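
The same pattern should cover the MapReduce daemons as well. A sketch,
assuming the stock hadoop-env.sh of that branch, which also reads
HADOOP_JOBTRACKER_OPTS and HADOOP_TASKTRACKER_OPTS (the heap values below
are just placeholders, pick what fits your cluster):

# 2 GB for the JobTracker (assumes hadoop-env.sh honors HADOOP_JOBTRACKER_OPTS)
HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Xmx2g"
# 512 MB for each TaskTracker (assumes HADOOP_TASKTRACKER_OPTS is honored too)
HADOOP_TASKTRACKER_OPTS="$HADOOP_TASKTRACKER_OPTS -Xmx512m"
# Any daemon without a per-daemon -Xmx falls back to the global
# HADOOP_HEAPSIZE (in MB), which defaults to 1000.
# HADOOP_HEAPSIZE=2000

After editing, restart the daemons; you can confirm the flag took effect by
checking the daemon's command line (e.g. jps -v, or ps and look for -Xmx).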

On Mon, Aug 1, 2011 at 11:41 PM, Kai Ju Liu <kaiju@tellapart.com> wrote:
> Hi. I'm trying to tweak heap sizes for the Hadoop daemons, i.e.
> namenode/datanode and jobtracker/tasktracker. I've tried setting
> HADOOP_NAMENODE_HEAPSIZE, HADOOP_DATANODE_HEAPSIZE, and so on in
> hadoop-env.sh, but the heap size remains at the default of 1,000MB.
>
> In the cluster setup documentation, I see references to setting
> HADOOP_NAMENODE_OPTS, HADOOP_DATANODE_OPTS, and so on in hadoop-env.sh. Is
> this the proper way to set heap sizes, and if so, what is the proper syntax
> within the OPTS values? Thanks!
>
> Kai Ju Liu
>



-- 
Harsh J
