hadoop-common-user mailing list archives

From Konstantin Shvachko <...@yahoo-inc.com>
Subject Re: Safe mode is ON
Date Wed, 11 Jul 2007 02:55:44 GMT
You can always leave safe mode manually:

bin/hadoop dfsadmin -safemode leave

Then set the replication to whatever you need, or even remove the 
unwanted files that have bad blocks.
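
For example (the replication factor and the path below are just 
placeholders for whatever fits your cluster):

  bin/hadoop dfsadmin -safemode leave
  # see which files have missing or under-replicated blocks
  bin/hadoop fsck /
  # lower the target replication across the whole tree
  bin/hadoop dfs -setrep -R 2 /
  # or delete a file whose blocks are lost for good
  bin/hadoop dfs -rm /path/to/broken/file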


Nguyen Kien Trung wrote:

> Thanks, Konstantin. I understand now.
>
> " At startup the name node accepts data node reports collecting 
> information about block locations. In order to leave safe mode it 
> needs to collect a configurable percentage called threshold of blocks, 
> which satisfy the minimal replication condition. The minimal 
> replication condition is that each block must have at least 
> dfs.replication.min replicas. When the threshold is reached, the name 
> node extends safe mode for a configurable amount of time to let the 
> remaining data nodes check in before it starts replicating missing 
> blocks. Then the name node leaves safe mode."
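> 
> For reference, these seem to be the matching keys in hadoop-site.xml 
> (I'm going from my reading of hadoop-default.xml, so please 
> double-check the names and defaults):
> 
>   <!-- fraction of minimally replicated blocks needed to leave safe mode -->
>   <property>
>     <name>dfs.safemode.threshold.pct</name>
>     <value>0.999</value>
>   </property>
>   <!-- the minimal replication condition: replicas required per block -->
>   <property>
>     <name>dfs.replication.min</name>
>     <value>1</value>
>   </property>
>   <!-- extension time in milliseconds once the threshold is reached -->
>   <property>
>     <name>dfs.safemode.extension</name>
>     <value>30000</value>
>   </property>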
>
> I realized that lots of blocks are missing their replicas, and that is 
> what keeps the name node in safe mode.
> Does it make sense to allow the command ./hadoop dfs -setrep to work 
> even when the name node is in safe mode? Otherwise the name node stays 
> idle forever.
>
> Konstantin Shvachko wrote:
>
>> You can run "hadoop fsck /" to see how many blocks are missing on 
>> your cluster.
>> See the definition of safe mode here:
>> http://lucene.apache.org/hadoop/api/org/apache/hadoop/dfs/NameNode.html#setSafeMode(org.apache.hadoop.dfs.FSConstants.SafeModeAction)
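>>
>> The same safe mode actions are available from the shell as well; if I 
>> remember the usage string right, it is:
>>
>>   bin/hadoop dfsadmin -safemode enter | leave | get | wait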
>>
>> --Konstantin
>>
>> erolagnab wrote:
>>
>>> Hi all,
>>>
>>> Just wondering, what could cause the NameNode to stay in safe mode 
>>> forever?
>>> I've left my machine running for 2 days and it's still in safe mode.
>>>
>>> Trung
>>>  
>>>
>>
>>
>
>

