hadoop-common-user mailing list archives

From "shangan" <shan...@corp.kaixin001.com>
Subject namenode consumes quite a lot of memory with only several hundred files in it
Date Mon, 06 Sep 2010 07:27:57 GMT
My cluster consists of 8 nodes, with the namenode on an independent machine. The following is what I get from the namenode web UI:

291 files and directories, 1312 blocks = 1603 total. Heap Size is 2.92 GB / 4.34 GB (67%)

I'm wondering why the namenode takes so much memory when I only store hundreds of files. I've checked the fsimage and edits files; their combined size is only 232 KB. As far as I know, a namenode can hold the metadata of millions of files in 1 GB of RAM, so why does my cluster consume so much memory? If this continues, I won't be able to store that many files before the memory is eaten up.
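As a rough sanity check of how little heap the metadata itself should need, here is a back-of-the-envelope estimate using the commonly cited rule of thumb of roughly 150 bytes of namenode heap per filesystem object (file, directory, or block); the 150-byte figure is an approximation, not an exact number:

```python
# Rough estimate of namenode metadata memory.
# Assumes ~150 bytes per object (file, directory, or block),
# a commonly cited rule of thumb rather than an exact figure.
BYTES_PER_OBJECT = 150

files_and_dirs = 291   # from the namenode web UI
blocks = 1312          # from the namenode web UI
objects = files_and_dirs + blocks  # 1603 total, matching the UI

estimated_bytes = objects * BYTES_PER_OBJECT
print(f"~{estimated_bytes / 1024:.0f} KB")  # well under 1 MB
```

The huge gap between this estimate and the 2.92 GB heap suggests the reported heap size reflects memory the JVM has committed (e.g. via a large -Xmx / HADOOP_HEAPSIZE setting) rather than live metadata objects.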

2010-09-06

shangan
