hbase-user mailing list archives

From karunakar <lkarunaka...@gmail.com>
Subject Re: HBase vs Hadoop memory configuration.
Date Tue, 29 Jan 2013 01:46:19 GMT
Hi Jean,

AFAIK:

As a rule of thumb, the namenode can track roughly 1 million blocks per 1 GB of
namenode heap. How much data that covers depends on dfs.block.size:
with a 128 MB block size, 1 million blocks = 128 MB * 1,000,000 = about 128 TB
of data.
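As a quick sanity check, that arithmetic can be sketched in shell (the 1-million-blocks-per-GB figure is a rule of thumb from operational experience, not a hard limit, and the variable names here are mine, not Hadoop settings):

```shell
# Rough NameNode capacity estimate: ~1 million blocks per GB of heap
# (rule of thumb, not a hard limit).
heap_gb=1
block_size_mb=128                              # dfs.block.size, in MB
blocks=$((heap_gb * 1000000))                  # ~1M blocks per GB of heap
data_tb=$((blocks * block_size_mb / 1000000))  # MB -> TB, decimal units
echo "${data_tb} TB addressable with ${heap_gb} GB of NameNode heap"
```

With these inputs the script prints 128 TB, matching the figure above; real capacity is lower because each file, directory, and replica also consumes namenode memory.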

Setting export HADOOP_HEAPSIZE=2000 in hadoop-env.sh (the value is a number of
MB, not a JVM flag) changes the heap for all the daemons at once. Rather than
doing that, use the configurations below for individual daemons.

For example, you can give the namenode, datanode, jobtracker, and tasktracker a
2 GB heap each with the following lines in hadoop-env.sh:

export HADOOP_NAMENODE_OPTS="-Xmx2g"
export HADOOP_DATANODE_OPTS="-Xmx2g"
export HADOOP_JOBTRACKER_OPTS="-Xmx2g"
export HADOOP_TASKTRACKER_OPTS="-Xmx2g"
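One caveat worth adding (my note, not from the lines above): the stock hadoop-env.sh already sets some of these variables (e.g. JMX management flags for the namenode), so it is safer to prepend the -Xmx flag and keep whatever is already there:

```shell
# Prepend -Xmx2g while preserving any options hadoop-env.sh already set
# (e.g. JMX flags). A sketch; adjust the sizes to your own cluster.
export HADOOP_NAMENODE_OPTS="-Xmx2g ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-Xmx2g ${HADOOP_DATANODE_OPTS}"
export HADOOP_JOBTRACKER_OPTS="-Xmx2g ${HADOOP_JOBTRACKER_OPTS}"
export HADOOP_TASKTRACKER_OPTS="-Xmx2g ${HADOOP_TASKTRACKER_OPTS}"
```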

For example: if you have a 16 GB server that leans heavily toward HBase, and it
runs a datanode, tasktracker, and regionserver on the same node, then give 4 GB
to the datanode, 2-3 GB to the tasktracker [counting the child task JVMs], and
6-8 GB to the regionserver.
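To make that budget concrete, here is a sketch of how the numbers add up on such a node (the variable names are mine for illustration; the leftover covers the OS and page cache):

```shell
# Hypothetical heap budget for a 16 GB, HBase-heavy slave node
total_gb=16
datanode_gb=4
tasktracker_gb=2        # tasktracker itself; child task JVMs come on top
regionserver_gb=8       # upper end of the 6-8 GB suggestion
os_gb=$((total_gb - datanode_gb - tasktracker_gb - regionserver_gb))
echo "left for OS, page cache, and child task JVMs: ${os_gb} GB"
```

The per-task child JVM heap is set separately via mapred.child.java.opts, so budget for (number of map + reduce slots) times that heap on top of the tasktracker's own 2-3 GB.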

Thanks,
karunakar.


--
View this message in context: http://apache-hbase.679495.n3.nabble.com/HBase-vs-Hadoop-memory-configuration-tp4037436p4037573.html
Sent from the HBase User mailing list archive at Nabble.com.
