hadoop-common-user mailing list archives

From Sameer Paranjpye <same...@yahoo-inc.com>
Subject Re: Max number of files in HDFS?
Date Tue, 28 Aug 2007 07:50:52 GMT
How much memory does your Namenode machine have?

You should look at the number of files, directories and blocks on your 
installation. All of these numbers are available via NamenodeFsck.Result.
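
The same totals also show up in the summary that fsck prints. Here is a 
minimal sketch that pulls those lines out of the command output; it assumes 
the "hadoop" launcher script is on the PATH and that the summary lines 
begin with "Total" (total files, dirs, blocks):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Runs "hadoop fsck /" and prints the "Total ..." summary lines
    // (the file, directory and block counts reported for the namespace).
    public class FsckTotals {
        public static void main(String[] args) throws Exception {
            Process p = new ProcessBuilder("hadoop", "fsck", "/").start();
            BufferedReader in = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                if (line.trim().startsWith("Total")) {
                    System.out.println(line.trim());
                }
            }
            p.waitFor();
        }
    }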

HADOOP-1687 (http://issues.apache.org/jira/browse/HADOOP-1687) has a 
detailed discussion of the amount of memory used by Namenode data 
structures.
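
As a rough illustration of the kind of sizing estimate that discussion 
leads to, here is a back-of-envelope sketch. The per-object byte cost below 
is an assumed placeholder, not a figure taken from HADOOP-1687; the real 
cost depends on the Hadoop version, path lengths and replication:

    // Back-of-envelope heap estimate for the Namenode namespace.
    // BYTES_PER_OBJECT is an assumed average cost per file, directory
    // or block object, used only to illustrate the arithmetic.
    public class NamespaceMemoryEstimate {
        private static final long BYTES_PER_OBJECT = 150; // assumption

        public static long estimateHeapBytes(long files, long dirs, long blocks) {
            return (files + dirs + blocks) * BYTES_PER_OBJECT;
        }

        public static void main(String[] args) {
            // e.g. 10M files, 1M directories, 12M blocks
            long bytes = estimateHeapBytes(10000000L, 1000000L, 12000000L);
            System.out.println("Approximate heap needed: " + (bytes >> 20) + " MB");
        }
    }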

Sameer

Taeho Kang wrote:
> Dear All,
> 
> Hi, my name is Taeho and I am trying to figure out the maximum number of
> files a namenode can hold.
> The main reason for doing this is that I want an estimate of how many
> files I can put into HDFS without exhausting the Namenode machine's
> memory.
> 
> I know the number depends on how much memory the machine has and how much
> of it is allocated to the running JVM.
> For the memory usage of the namenode, I can simply use the Runtime object
> of the JDK.
> For the total number of files residing in the DFS, I am thinking of using
> the getTotalFiles() function of the NamenodeFsck class in the
> org.apache.hadoop.dfs package. Am I correct in using NamenodeFsck here?
> 
> Or, has anybody done similar experiments?
> 
> Any comments/suggestions will be appreciated.
> Thanks in advance.
> Best Regards,
> 
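
For the Runtime-based measurement described above, a minimal sketch; note 
that this reports the whole JVM heap in use, not just the Namenode's 
namespace structures, so it is only a rough proxy:

    // Snapshot of the JVM's used heap via java.lang.Runtime.
    // Measures the entire heap, including objects not yet collected.
    public class HeapUsage {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            rt.gc(); // encourage a collection so the reading is less noisy
            long used = rt.totalMemory() - rt.freeMemory();
            System.out.println("Used heap: " + (used >> 20) + " MB");
        }
    }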
