hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Re: DN limit
Date Sat, 24 Dec 2011 05:39:28 GMT
Bourne,

Do you have 14 million files each taking up a single block, or are
these files multi-block? What block count shows up in the live nodes
list of the NN web UI?
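If the web UI is awkward to get at, the fsck summary prints the same
totals. Something like this should work (the path is just an example,
and a run over a 14M-file namespace will take a while):

    hadoop fsck / | tail -20

The "Total files" and "Total blocks (validated)" lines at the end of
the summary will tell you whether those 14 million files map to
roughly 14 million blocks.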

2011/12/23 bourne1900 <bourne1900@yahoo.cn>:
> Sorry, here is a more detailed description:
> I want to know how many files a datanode can hold, so there is only one datanode in my
cluster.
> When the datanode holds 14 million files, the cluster stops working: the datanode has
used all of its memory (32 GB), while the namenode's memory is fine.
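It is worth confirming whether that 32G is really the DN's JVM heap
or the machine's overall memory. A quick, rough check, with the pid
left for you to fill in:

    jmap -heap <datanode-pid>

The DN's heap ceiling is whatever -Xmx you give it, for example via
HADOOP_DATANODE_OPTS in conf/hadoop-env.sh (the value below is only
an illustration, not a recommendation):

    export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"

The DN does keep an in-memory entry per block replica it stores, so
its memory use grows with block count, but 14M replicas filling 32G
of heap would be surprising on its own.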
>
>
>
>
> Bourne
>
> Sender: Adrian Liu
> Date: Friday, December 23, 2011, 10:47 AM
> To: common-user@hadoop.apache.org
> Subject: Re: DN limit
> In my understanding, the max number of files stored in HDFS should be
> <MEM of namenode>/sizeof(inode struct). This max number of HDFS files should be no
> smaller than the max number of files a datanode can hold.
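(For scale, using the commonly quoted ballpark of ~150 bytes of
namenode heap per namespace object, i.e. per file, directory, or
block, rather than a literal sizeof(inode struct):

    32 GB / ~150 bytes per object  =>  roughly 200 million objects

so a namenode with tens of GB of heap should be nowhere near its
limit at 14 million files.)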
>
> Please feel free to correct me, because I'm just beginning to learn Hadoop.
>
> On Dec 23, 2011, at 10:35 AM, bourne1900 wrote:
>
>> Hi all,
>> How many files can a datanode hold?
>> In my test case, when a datanode saves 14 million files, the cluster can't work.
>>
>>
>>
>>
>> Bourne
>
> Adrian Liu
> adrianl@yahoo-inc.com



-- 
Harsh J
