hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: namenode consumes quite a lot of memory with only several hundred files in it
Date Mon, 06 Sep 2010 10:15:59 GMT
On 06/09/10 08:27, shangan wrote:
> my cluster consists of 8 nodes with the namenode on an independent 
> machine. The following is what I get from the namenode web UI:
> 291 files and directories, 1312 blocks = 1603 total. Heap Size is 2.92 GB / 4.34 GB (67%)
> I'm wondering why the namenode takes so much memory while I only store 
> hundreds of files. I've checked the fsimage and edits files, and the 
> two together are only 232 KB. As far as I know a namenode can hold the 
> metadata of millions of files in 1 GB of RAM, so why does my cluster 
> consume so much memory? If it goes on, I won't be able to store that 
> many files before the memory is eaten up.
>
It might just be that there isn't enough memory pressure on your 
pre-allocated heap to trigger a GC yet; have a play with the GC tooling 
and jvisualvm to see what's going on.
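For instance, something like this (a rough sketch; substitute the 
namenode's actual process id for <pid>, and run the tools from the same 
JDK as the running JVM):

  # find the namenode's pid
  jps -l | grep -i namenode

  # per-generation heap occupancy (%), sampled every 5 seconds --
  # a low O (old gen) column despite a large committed heap means the
  # JVM simply hasn't needed to collect yet
  jstat -gcutil <pid> 5000

  # one-off dump of the heap configuration and current usage
  jmap -heap <pid>

Note the used-heap figure on the web UI includes garbage that hasn't 
been collected yet, so it can sit well above the live set until a 
collection runs.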
