hadoop-mapreduce-user mailing list archives

From sam liu <samliuhad...@gmail.com>
Subject The minimum memory requirements to datanode and namenode?
Date Mon, 13 May 2013 02:28:56 GMT

I set up a cluster with 3 nodes and did not submit any job to it afterwards.
But after a few days I found the cluster is unhealthy:
- Commands like 'hadoop dfs -ls /' or 'hadoop dfsadmin -report' hang for a
while and return no result
- The page at 'http://namenode:50070' could not be opened as expected...
- ...

I did not find any useful info in the logs, but found the available memory
on the cluster nodes was very low at that time:
- node1(NN,JT,DN,TT): 158 MB of memory available
- node2(DN,TT): 75 MB of memory available
- node3(DN,TT): 174 MB of memory available
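(For reference, the per-node figures above can be reproduced on Linux by
reading /proc/meminfo directly; this is just a sketch, and note that the
MemAvailable field only exists on kernels >= 3.14, older kernels expose
MemFree only:)

```shell
# Print total and available/free memory in MB from /proc/meminfo (Linux only).
awk '/^MemTotal:|^MemAvailable:|^MemFree:/ {printf "%s %d MB\n", $1, $2/1024}' /proc/meminfo
```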

I guess the issue with my cluster is caused by a lack of memory, and my
questions are:
- Without running jobs, what are the minimum memory requirements for the
datanode and namenode?
- How do I configure the minimum memory for the datanode and namenode?
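(In case it is relevant: the daemon heap sizes are set in
conf/hadoop-env.sh. HADOOP_HEAPSIZE is the default maximum heap in MB for
all Hadoop daemons, defaulting to 1000 MB when unset, and the per-daemon
*_OPTS variables can override it with an explicit -Xmx. The values below
are only a hedged sketch for small nodes, not recommendations:)

```shell
# conf/hadoop-env.sh -- example heap settings (illustrative values only,
# tune to your nodes; each daemon otherwise defaults to a 1000 MB max heap)
export HADOOP_HEAPSIZE=256                                    # default max heap (MB) for all daemons
export HADOOP_NAMENODE_OPTS="-Xmx256m $HADOOP_NAMENODE_OPTS"  # namenode override
export HADOOP_DATANODE_OPTS="-Xmx128m $HADOOP_DATANODE_OPTS"  # datanode override
```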


Sam Liu
