hadoop-common-user mailing list archives

From Peter W. <pe...@marketingbrokers.com>
Subject Re: How much RAMs needed...
Date Mon, 16 Jul 2007 03:20:26 GMT
Trung,

Someone more knowledgeable will need to help.

It's my very simple understanding that Hadoop DFS
stores every file it processes as one or more blocks.

The JVM heap being exceeded could possibly be a file
handle issue rather than a matter of overall block count.

In other words, your namenode should be able to
start up and handle far more input records if you use
fewer, bigger files on the datanodes.
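
For instance, a rough, untested sketch along these lines (the input and
output paths below are made up; the classes are the standard Hadoop API)
could pack the tiny files into a single SequenceFile, so the namenode
only has to track one big file and its blocks:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Packs every file under one directory into a single SequenceFile,
// keyed by the original file name.
public class PackSmallFiles {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path inputDir = new Path("/user/trung/tiny-files"); // hypothetical
    Path packed = new Path("/user/trung/packed.seq");   // hypothetical

    SequenceFile.Writer writer = SequenceFile.createWriter(
        fs, conf, packed, Text.class, BytesWritable.class);
    try {
      for (FileStatus stat : fs.listStatus(inputDir)) {
        if (stat.isDir()) {
          continue;
        }
        // Small files only: read the whole file into memory.
        byte[] buf = new byte[(int) stat.getLen()];
        FSDataInputStream in = fs.open(stat.getPath());
        try {
          in.readFully(0, buf);
        } finally {
          in.close();
        }
        // key = original file name, value = the file's raw bytes
        writer.append(new Text(stat.getPath().getName()),
                      new BytesWritable(buf));
      }
    } finally {
      writer.close();
    }
  }
}

A MapReduce job can then read the packed records back with
SequenceFileInputFormat instead of opening thousands of tiny files.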

Best,

Peter W.


On Jul 15, 2007, at 7:07 PM, Nguyen Kien Trung wrote:

> Hi Peter,
>
> I appreciate the info, but I'm afraid I'm not getting what you mean.
> The issue I've encountered is that I'm not able to start up the
> namenode due to an out-of-memory error, given that there is a huge
> number of tiny files on the datanodes.
