hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3677) Problems with generation stamp upgrade
Date Thu, 10 Jul 2008 02:33:31 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12612355#action_12612355 ]

Konstantin Shvachko commented on HADOOP-3677:
---------------------------------------------

Maybe a good solution would be to convert the distributed upgrade into a local data-node upgrade.
That would solve both of the problems above and also eliminate the warning message reported in HADOOP-3732.
The only disadvantage of this approach I can see is that data-nodes will take a rather long
time to start up, around 5 minutes each on a large cluster.
But this can be mitigated by logging reasonable messages about the upgrade progress.
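
To make the idea concrete, here is a minimal sketch of what a local, per-data-node upgrade with progress reporting could look like. It is not the actual DataNode upgrade code; the meta-file name patterns (blk_<id>.meta before the upgrade, blk_<id>_<genStamp>.meta after, following HADOOP-2656) and the initial generation stamp value are assumptions made for illustration.

{code:java}
// Sketch only: rename old-format meta-files blk_<id>.meta to the new
// blk_<id>_<genStamp>.meta form locally, printing progress so a long
// upgrade remains visible to operators. Names and the initial generation
// stamp value (0) are assumptions, not the real DataNode upgrade code.
import java.io.File;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LocalMetaFileUpgrade {
  // Old format: blk_<blockId>.meta ; new format: blk_<blockId>_<genStamp>.meta
  private static final Pattern OLD_META = Pattern.compile("blk_(-?\\d+)\\.meta");
  private static final long INITIAL_GEN_STAMP = 0; // assumed pre-upgrade stamp

  public static void upgradeDirectory(File dataDir) {
    File[] files = dataDir.listFiles();
    if (files == null) {
      return;
    }
    int done = 0;
    for (File f : files) {
      Matcher m = OLD_META.matcher(f.getName());
      if (!m.matches()) {
        continue; // already upgraded, or a block data file
      }
      File renamed = new File(dataDir,
          "blk_" + m.group(1) + "_" + INITIAL_GEN_STAMP + ".meta");
      if (!f.renameTo(renamed)) {
        System.err.println("Failed to rename " + f);
      }
      if (++done % 10000 == 0) {
        System.out.println("Upgraded " + done + " meta-files in " + dataDir);
      }
    }
    System.out.println("Finished: upgraded " + done + " meta-files in " + dataDir);
  }

  public static void main(String[] args) {
    for (String dir : args) {
      upgradeDirectory(new File(dir));
    }
  }
}
{code}

Because every rename is a local file-system operation, each data-node could run this scan independently at startup and report its progress without any coordination through the name-node.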

> Problems with generation stamp upgrade
> --------------------------------------
>
>                 Key: HADOOP-3677
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3677
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.18.0
>
>
> # The generation stamp upgrade renames blocks' meta-files so that the name contains the
> block's generation stamp, as stated in HADOOP-2656.
> If a data-node has blocks that do not belong to any file and the name-node asks the
> data-node to remove those blocks before or during the upgrade, the data-node removes the
> blocks but not their meta-files, because the meta-file names are still in the old format,
> which the new code does not recognize. We can therefore end up with a number of garbage
> files that are hard to identify as unused and that the system will never remove
> automatically (a small illustration follows the quoted description).
> I think this should ultimately be handled by the upgrade code, but maybe it would be right
> to fix HADOOP-3002 for the 0.18 release, which would avoid scheduling block removal while
> the name-node is in safe mode.
> # I was not able to get the upgrade -force option to work. This option lets the name-node
> proceed with a distributed upgrade even if the data-nodes cannot complete their local
> upgrades. Did we test this feature at all for the generation stamp upgrade?
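
For reference, a minimal illustration of the garbage-file problem from item 1 of the quoted description: an old-format meta-file whose block data file has already been deleted is no longer matched by the new naming scheme and is never cleaned up. This is not HDFS code; the blk_<id> and blk_<id>.meta name patterns are assumptions based on HADOOP-2656.

{code:java}
// Sketch only: list old-format meta-files whose block data file is gone,
// i.e. the garbage files the new code would never recognize or remove.
import java.io.File;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OrphanedMetaFileScan {
  private static final Pattern OLD_META = Pattern.compile("blk_(-?\\d+)\\.meta");

  /** Print old-format meta-files whose corresponding block file no longer exists. */
  public static void findOrphans(File dataDir) {
    File[] files = dataDir.listFiles();
    if (files == null) {
      return;
    }
    for (File f : files) {
      Matcher m = OLD_META.matcher(f.getName());
      if (m.matches() && !new File(dataDir, "blk_" + m.group(1)).exists()) {
        System.out.println("Orphaned old-format meta-file: " + f);
      }
    }
  }

  public static void main(String[] args) {
    for (String dir : args) {
      findOrphans(new File(dir));
    }
  }
}
{code}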

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

