hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1752) "dfsadmin -upgradeProgress force" should leave safe mode in order to push the upgrade forward.
Date Wed, 22 Aug 2007 00:04:31 GMT
"dfsadmin -upgradeProgress force" should leave safe mode in order to push the upgrade forward.

                 Key: HADOOP-1752
                 URL: https://issues.apache.org/jira/browse/HADOOP-1752
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.14.0
            Reporter: Konstantin Shvachko
            Assignee: Raghu Angadi
             Fix For: 0.15.0

I have a cluster (created before hadoop 0.14) on which 40% of data-node blocks were lost.
I tried to upgrade it to 0.14.
The distributed upgrade was scheduled correctly on the name-node and all data-nodes, but it
never started, because
there were not enough blocks for the name-node to leave safe mode.
I first tried
bin/hadoop dfsadmin -safemode leave
but this is prohibited while a distributed upgrade is in progress. I then tried
bin/hadoop dfsadmin -upgradeProgress force
but this didn't work either, because the distributed upgrade does not start until the safe mode conditions
are met on the name-node.
A workaround would be to set the safe-mode threshold to 60%, if of course I knew exactly how
many blocks were missing.
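For reference, the workaround above amounts to lowering the fraction of reported blocks the name-node requires before leaving safe mode. A minimal sketch of that configuration change, assuming the property name dfs.safemode.threshold.pct used by contemporary Hadoop releases (default 0.999):

```xml
<!-- hadoop-site.xml (sketch; property name assumed from contemporary
     Hadoop releases). Lowers the fraction of blocks that must report
     before the name-node leaves safe mode from 0.999 to 0.6. -->
<property>
  <name>dfs.safemode.threshold.pct</name>
  <value>0.6</value>
</property>
```

Such a change only helps when the administrator knows in advance how many blocks are actually missing, which is the limitation noted above.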

The "force" command was designed as a way for an administrator to get the upgrade going even
if the cluster is not in perfect shape.
This would let us save at least the data that is available rather than losing everything.

I propose to modify the force command so that it would let the cluster start the distributed upgrade
even if safe mode is still on.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
