hadoop-common-user mailing list archives

From bourne1900 <bourne1...@yahoo.cn>
Subject Re: Re: DN limit
Date Fri, 23 Dec 2011 03:06:49 GMT
Sorry, here is a more detailed description:
I want to know how many files a datanode can hold, so there is only one datanode in my cluster.
When the datanode stores 14 million files, the cluster stops working: the datanode has used
all of its 32 GB of memory, while the namenode's memory usage is still fine.
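For anyone who wants to repeat this kind of estimate, here is a minimal back-of-envelope sketch
(not a measurement from my cluster): it multiplies a block count by an assumed per-replica heap
overhead to guess how much DataNode memory the replica metadata alone would need. The 250-byte
figure and the one-block-per-file assumption are placeholders, not values taken from this thread.

// Back-of-envelope sketch only: the per-replica overhead below is an
// assumed placeholder, not a value measured on the cluster in this thread.
public class DataNodeHeapSketch {
    public static void main(String[] args) {
        long files = 14_000_000L;        // file count reported in this thread
        long blocksPerFile = 1L;         // assumption: tiny files, one block each
        long bytesPerReplica = 250L;     // assumption: JVM overhead per replica object
        long heapBytes = files * blocksPerFile * bytesPerReplica;
        System.out.printf("Rough DataNode heap for replica metadata: %.1f GB%n",
                heapBytes / (1024.0 * 1024 * 1024));
    }
}

By that rough arithmetic the replica map alone would not explain 32 GB, so other per-file costs
(block reports, directory scanning, and so on) presumably contribute as well.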




Bourne

Sender: Adrian Liu
Date: Friday, December 23, 2011, 10:47 AM
To: common-user@hadoop.apache.org
Subject: Re: DN limit
In my understanding, the maximum number of files stored in HDFS should be roughly
<MEM of namenode> / sizeof(inode struct). This maximum number of HDFS files should be no smaller
than the maximum number of files a single datanode can hold.
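As a rough illustration of that formula (the numbers below are assumptions, not figures from this
thread): if each file and block object on the namenode costs on the order of 150 bytes of heap, a
commonly quoted rule of thumb rather than an exact sizeof(), then a 32 GB namenode heap gives an
upper bound like this:

// Illustrative sketch of the <MEM of namenode> / sizeof(inode struct) idea.
// The ~150 bytes per namespace object is a rule-of-thumb assumption, and the
// 32 GB heap size is chosen only to match the memory mentioned in this thread.
public class NameNodeCapacitySketch {
    public static void main(String[] args) {
        long nameNodeHeapBytes = 32L * 1024 * 1024 * 1024; // assumed 32 GB heap
        long bytesPerObject = 150L;                        // assumed cost per file/block object
        long objectsPerFile = 2L;                          // roughly one inode + one block for small files
        long maxFiles = nameNodeHeapBytes / (bytesPerObject * objectsPerFile);
        System.out.println("Rough upper bound on files: " + maxFiles);
    }
}

By that estimate the namenode would top out around 100 million small files, which is consistent
with the report that the namenode's memory was still fine when the datanode ran out.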

Please feel free to correct me, because I'm just beginning to learn Hadoop.

On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:

> Hi all,
> How many files can a datanode hold?
> In my test case, when a datanode stores 14 million files, the cluster stops working.
> 
> 
> 
> 
> Bourne

Adrian Liu
adrianl@yahoo-inc.com