hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3677) Problems with generation stamp upgrade
Date Fri, 11 Jul 2008 17:50:32 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12612941#action_12612941 ]

Raghu Angadi commented on HADOOP-3677:

> ... specify that "do not send block reports before distributed upgrade is complete".
Yes, we can fix it with more features like this, but we would still be left with thousands
of warning messages. The question is what to do for this jira.

Whether a local upgrade is a hack is, I think, debatable. It makes logical sense to me: the
datanode metadata file name format changed between 0.17 and 0.18, so the datanode converts
these names to the new format when it is upgraded.

In any case, a hack that only the core developers need to know about might be more desirable
than a hack in the upgrade procedure that all admins need to be aware of.

If there is consensus to convert the metadata file names when the datanode starts up, I will
submit a patch.
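To make the rename concrete, here is a minimal sketch of the kind of conversion the datanode
could do at startup. It assumes old-format names look like "blk_<id>.meta" and new-format
names look like "blk_<id>_<genstamp>.meta" (per HADOOP-2656); the placeholder generation
stamp value and the class/method names are illustrative assumptions, not Hadoop's actual API.

```java
// Sketch: convert a pre-0.18 meta-file name to the new format.
// Assumption: blocks created before the upgrade get a placeholder
// "grandfather" generation stamp (value chosen here for illustration).
public class MetaFileRenamer {
    // Hypothetical stamp for blocks that predate generation stamps.
    static final long GRANDFATHER_GENERATION_STAMP = 0;

    /**
     * Returns the new-format name for an old-format meta-file,
     * or the input unchanged if it is not a meta-file or already
     * matches the new "blk_<id>_<genstamp>.meta" format.
     */
    static String toNewFormat(String name) {
        if (!name.endsWith(".meta")) {
            return name;                 // not a meta-file, leave as-is
        }
        String base = name.substring(0, name.length() - ".meta".length());
        if (base.matches("blk_-?\\d+_\\d+")) {
            return name;                 // already in the new format
        }
        return base + "_" + GRANDFATHER_GENERATION_STAMP + ".meta";
    }
}
```

At startup the datanode would walk its data directories and rename each old-format file to
`toNewFormat(name)`, which is exactly the "hack only the core developers need to know" above.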

> Problems with generation stamp upgrade
> --------------------------------------
>                 Key: HADOOP-3677
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3677
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.18.0
> # The generation stamp upgrade renames blocks' meta-files so that the name contains the
> block generation stamp, as stated in HADOOP-2656. If a data-node has blocks that do not
> belong to any files, and the name-node asks the data-node to remove those blocks during
> or before the upgrade, the data-node will remove the blocks but not the meta-files,
> because their names are still in the old format, which is not recognized by the new code.
> So we can end up with a number of garbage files that are hard to recognize as unused and
> that the system will never remove.
> I think this should be handled by the upgrade code in the end, but maybe it would be
> right to fix HADOOP-3002 for the 0.18 release, which would avoid scheduling block removal
> while the name-node is in safe-mode.
> # I was not able to get the upgrade -force option to work. This option lets the name-node
> proceed with a distributed upgrade even if the data-nodes are not able to complete their
> local upgrades. Did we test this feature at all for the generation stamp upgrade?

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
