hadoop-common-user mailing list archives

From Pratyush Banerjee <pratyushbaner...@aol.com>
Subject NameNode does not come out of Safemode automatically in Hadoop-0.17.2
Date Fri, 14 Nov 2008 04:39:20 GMT
Hi All,

We have been using hadoop-0.17.2 for some time now, and we recently had a 
namenode crash caused by the disk filling up.
To bring the namenode back up with minimal data loss, we had to manually 
edit the edits file in a hex editor and restart the namenode.

However, after restarting, the namenode entered safe mode (as expected), 
but hours later it still has not come out.
We can obviously force it out, but should it not leave safe mode 
automatically?
Even after 12 hours in safe mode, the ratio of reported blocks is still 
stuck at 0.9768.

Running fsck on / in HDFS does report some corrupt files.
What issue is blocking the namenode from coming out of safe mode? If we 
have to take it out manually (hadoop dfsadmin -safemode leave), what 
procedure should we follow to ensure data safety?
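For reference, the steps we are considering can be sketched as the checklist below. This is only a sketch, not an authoritative procedure: the dfsadmin and fsck subcommands are standard Hadoop CLI commands, but the ordering is our assumption, and RUN is left as "echo" so the script only prints the plan instead of touching a live cluster.

```shell
#!/bin/sh
# Hedged sketch of a safe-mode recovery checklist (assumption: RUN=echo
# makes this a dry run; set RUN= to execute against a real cluster).
RUN=echo

# 1. Inspect safe-mode status and the reported-block ratio.
$RUN hadoop dfsadmin -safemode get

# 2. Enumerate corrupt or missing blocks before forcing anything.
$RUN hadoop fsck / -files -blocks -locations

# 3. Note the affected files, then leave safe mode manually.
$RUN hadoop dfsadmin -safemode leave

# 4. Quarantine files with missing blocks into /lost+found.
$RUN hadoop fsck / -move
```

The point of enumerating corrupt blocks *before* leaving safe mode is to know exactly which files are at risk before any client writes can touch them.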

thanks and regards,

