hadoop-common-user mailing list archives

From: Fei Pan <cnwe...@gmail.com>
Subject: How to take full advantage of a large amount of memory?
Date: Tue, 03 May 2011 03:20:55 GMT
I am managing a cluster composed of 1 namenode and 8 datanodes (CDH3
hadoop-0.20.2); each machine has a 16-core CPU and 32 GB of memory.

To make full use of the large memory, I configured the following:

  HADOOP_HEAPSIZE = 1000              (hadoop-env.sh; value is in MB, i.e. 1 GB)
  mapred.child.java.opts = -Xmx1024m  (mapred-site.xml)
  io.sort.mb = 256                    (mapred-site.xml; value is in MB)
I want to know whether these settings are suitable for the datanode
machines (should they be smaller or larger?), and whether there are any
other settings that would take better advantage of the large memory.
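
For reference, here is roughly how these settings appear in my files,
trimmed to the relevant entries (note that HADOOP_HEAPSIZE and io.sort.mb
both take plain integers interpreted as MB):

  # hadoop-env.sh -- JVM heap for the Hadoop daemons, in MB
  export HADOOP_HEAPSIZE=1000

  <!-- mapred-site.xml -->
  <configuration>
    <property>
      <name>mapred.child.java.opts</name>
      <!-- heap given to each map/reduce child JVM -->
      <value>-Xmx1024m</value>
    </property>
    <property>
      <name>io.sort.mb</name>
      <!-- map-side sort buffer, in MB -->
      <value>256</value>
    </property>
  </configuration>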

Thank you in advance for your replies.


-- 
Stay Hungry. Stay Foolish.
