hadoop-hdfs-user mailing list archives

From sudhakara st <sudhakara...@gmail.com>
Subject Re: namenode memory test
Date Sat, 27 Apr 2013 18:59:47 GMT
Every file, directory, and block in HDFS is represented as an object in the
namenode's memory, and the namenode consumes roughly 150 bytes per object on
average.
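
As a rough back-of-envelope sketch (the counts below are illustrative, not
measured from a real cluster), you can multiply the object counts by ~150
bytes:

  # hypothetical counts; read the real ones from 'hadoop fsck /'
  FILES=1000000       # 1 million files
  DIRS=100000         # 100 thousand directories
  BLOCKS=1200000      # 1.2 million blocks (a file may span several blocks)
  OBJECTS=$((FILES + DIRS + BLOCKS))
  echo "~${OBJECTS} objects -> ~$((OBJECTS * 150 / 1024 / 1024)) MB of namenode heap"

For those example counts it works out to about 330 MB of heap just for the
metadata objects.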


On Wed, Apr 24, 2013 at 12:30 PM, Mahesh Balija
<balijamahesh.mca@gmail.com> wrote:

> You can manually go into the directory configured for hadoop.tmp.dir in
> core-site.xml and do an ls -l to see the on-disk usage; it will contain
> fsimage, edits, fstime, and VERSION.
> Or use the basic commands:
> hadoop fs -du
> hadoop fsck
>
>
>
> On Wed, Apr 24, 2013 at 7:56 AM, 自己 <zx4866123@163.com> wrote:
>
>> Hi, I would like to know how much memory our data takes on the name-node
>> per block, file, and directory.
>> For example, the metadata size of a file.
>> When I store some files in HDFS, how can I find out how much memory they
>> take on the name-node?
>> Are there any tools or commands to measure the memory usage on the
>> name-node?
>>
>> I'm looking forward to your reply! Thanks!
>>
>>
>>
>
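
For reference, a minimal sketch of the commands Mahesh suggests above; the
name-directory path assumes the default hadoop.tmp.dir layout
(/tmp/hadoop-${USER}/dfs/name) and will differ if your cluster overrides it:

  ls -l /tmp/hadoop-${USER}/dfs/name/current            # fsimage, edits, fstime, VERSION (on-disk size, not heap)
  hadoop fs -du /                                       # data size per directory
  hadoop fsck / | grep -E 'Total (dirs|files|blocks)'   # object counts for the estimate above

Note these report on-disk sizes and object counts rather than live memory use;
the namenode web UI (port 50070 by default) also shows the current heap size
directly.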


-- 

Regards,
.....  Sudhakara.st
