hadoop-common-user mailing list archives

From Harsh J <qwertyman...@gmail.com>
Subject Re: Re: HDFS start-up with safe mode?
Date Fri, 08 Apr 2011 17:58:01 GMT
Hello,

I'm not quite clear on why you'd want to disable a consistency check
such as safemode. It guarantees that your DFS is made ready to serve
requests only after a sufficient fraction of blocks has been reported
by the DataNodes. If your NN ever enters safemode later on, it is
vital that you look at the logs and fsck reports to determine what has
gone wrong.
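For reference, the safemode state and block health can be inspected from the command line; a sketch, using the dfsadmin and fsck tools from the 0.20-era Hadoop CLI:

```shell
# Check whether the NameNode is currently in safemode
hadoop dfsadmin -safemode get

# Run a filesystem check on the root to find missing or
# under-replicated blocks and their locations
hadoop fsck / -blocks -locations

# Only after you have diagnosed and fixed the cause:
# force the NameNode out of safemode manually
hadoop dfsadmin -safemode leave
```

Note that `-safemode leave` only hides the symptom; fsck is what tells you whether blocks are actually missing.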

On Fri, Apr 8, 2011 at 3:06 PM, springring <springring@126.com> wrote:
> I modify the value of "dfs.safemode.threshold.pct" to zero, now everything is ok.
> log file as below
> But there are still three questions
>
>  1. Can I restore the percentage of blocks that must satisfy the minimal replication requirement
>          back to 99.9%?  hadoop balancer?  I feel it would be safer.

The safemode threshold is what guarantees that; that is why it is
called the 'safe' mode. I'm not sure what the balancer has to do with
it, though.

In production one rarely restarts the NameNode, so I s'pose this is
just to get rid of some development hassles?

If so, you may additionally want to lower the safemode extension
period from its default of 30s to 0s, so the NameNode leaves safemode
as soon as the threshold is met and the check is gone entirely.
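Both settings discussed above live in hdfs-site.xml; a minimal sketch, assuming 0.20-era property names:

```xml
<!-- hdfs-site.xml -->
<property>
  <!-- Fraction of blocks that must be reported before the NameNode
       may leave safemode; default 0.999 (i.e. 99.9%).
       A value of 0 effectively disables the check. -->
  <name>dfs.safemode.threshold.pct</name>
  <value>0.999</value>
</property>
<property>
  <!-- Time in milliseconds the NameNode remains in safemode after the
       threshold is reached; default 30000 (30s). 0 leaves immediately. -->
  <name>dfs.safemode.extension</name>
  <value>0</value>
</property>
```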

-- 
Harsh J
http://harshj.com
