hadoop-common-user mailing list archives

From "Dhruba Borthakur" <dhr...@gmail.com>
Subject Re: Maximum number of files in hadoop
Date Mon, 09 Jun 2008 04:57:43 GMT
The maximum number of files in HDFS depends on the amount of memory
available to the namenode. Each file object and each block object
takes about 150 bytes of memory. Thus, if you have 1 million files
and each file has one block, that is 2 million objects, so you would
need roughly 300 MB of memory for the namenode.
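The estimate above can be sketched as a quick back-of-the-envelope calculation. The ~150 bytes/object figure is the approximation from this message; actual namenode heap usage varies with Hadoop version, filename length, and replication metadata, so treat this as a rough sizing aid only:

```python
# Rough namenode heap estimate for HDFS metadata, using the
# ~150 bytes per file/block object approximation from the message above.
BYTES_PER_OBJECT = 150


def namenode_memory_bytes(num_files, blocks_per_file=1):
    """Estimate namenode heap needed for file and block objects."""
    # One metadata object per file, plus one per block of each file.
    num_objects = num_files + num_files * blocks_per_file
    return num_objects * BYTES_PER_OBJECT


# 1 million files with one block each -> 2 million objects -> ~300 MB
estimate = namenode_memory_bytes(1_000_000)
print(f"{estimate / (1024 ** 2):.0f} MB")  # about 286 MB (~300 MB)
```

Note that the file count itself is not the whole story: many small files inflate the object count without adding data, which is why HDFS favors fewer, larger files.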


On Fri, Jun 6, 2008 at 11:51 PM, karthik raman <karthik_2884@yahoo.com> wrote:
> Hi,
>    What is the maximum number of files that can be stored on HDFS? Is it dependent on
> the namenode memory configuration? Also, does this impact the performance of the namenode in any way?
> thanks in advance
> Karthik
