hadoop-hdfs-issues mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-6981) DN upgrade with layout version change should not use trash
Date Fri, 05 Sep 2014 19:13:28 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14123413#comment-14123413 ]

Arpit Agarwal edited comment on HDFS-6981 at 9/5/14 7:12 PM:
-------------------------------------------------------------

Lacking an explicit finalize command for rolling upgrade, it is hard for the DN to determine
when to delete 'previous'. Rolling upgrade is signaled by the presence/absence of RollingUpgradeStatus
in the heartbeat response.

Without modifying the NN, one solution is that the DN creates a marker file when rolling upgrade
is signaled by NN. When rolling upgrade is no longer signaled by NN, 'previous' is cleaned
up only if the marker file is present. Else a regular upgrade is in progress and 'previous'
is left alone.
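
A minimal sketch of the marker-file idea above, assuming a per-storage-directory marker file and hook methods invoked from heartbeat processing; the marker name, class, and method names are illustrative assumptions, not the actual DataNode code:

{code:java}
// Hypothetical sketch only, not the actual DataNode code path. It assumes a
// per-storage-directory marker file and hook methods called from heartbeat
// processing; all names here are illustrative.
import java.io.File;
import java.io.IOException;

class RollingUpgradeMarkerSketch {
  // Assumed marker file name; the real patch may choose something different.
  private static final String MARKER_NAME = "rollingUpgradeInProgress";
  private final File storageDir;   // e.g. a DN storage directory root

  RollingUpgradeMarkerSketch(File storageDir) {
    this.storageDir = storageDir;
  }

  /** Heartbeat response carried a RollingUpgradeStatus: create the marker. */
  void onRollingUpgradeSignaled() throws IOException {
    File marker = new File(storageDir, MARKER_NAME);
    if (!marker.exists() && !marker.createNewFile()) {
      throw new IOException("Could not create " + marker);
    }
  }

  /** Heartbeat response no longer carries a RollingUpgradeStatus. */
  void onRollingUpgradeCleared(File previousDir) throws IOException {
    File marker = new File(storageDir, MARKER_NAME);
    if (marker.exists()) {
      // Marker present: a rolling upgrade just ended, so 'previous' can go.
      deleteRecursively(previousDir);
      if (!marker.delete()) {
        throw new IOException("Could not delete " + marker);
      }
    }
    // No marker: a regular upgrade owns 'previous'; leave it alone.
  }

  private static void deleteRecursively(File f) {
    File[] children = f.listFiles();
    if (children != null) {
      for (File child : children) {
        deleteRecursively(child);
      }
    }
    f.delete();
  }
}
{code}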

I am wary of making NN changes; the interaction with HA is complex enough as it is.


was (Author: arpitagarwal):
Lacking an explicit finalize command for rolling upgrade, it is hard for the DN to determine
when to delete 'previous'. Rolling upgrade is signaled by the presence/absence of RollingUpgradeInfo
in the heartbeat response.

Without modifying the NN, one solution is that the DN creates a marker file when rolling upgrade
is signaled by NN. When rolling upgrade is no longer signaled by NN, 'previous' is cleaned
up only if the marker file is present. Else a regular upgrade is in progress and 'previous'
is left alone.

I am wary of making NN changes; the interaction with HA is complex enough as it is.

> DN upgrade with layout version change should not use trash
> ----------------------------------------------------------
>
>                 Key: HDFS-6981
>                 URL: https://issues.apache.org/jira/browse/HDFS-6981
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 3.0.0
>            Reporter: James Thomas
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-6981.01.patch, HDFS-6981.02.patch, HDFS-6981.03.patch, HDFS-6981.04.patch
>
>
> Post HDFS-6800, we can encounter the following scenario:
> # We start with DN software version -55 and initiate a rolling upgrade to version -56
> # We delete some blocks, and they are moved to trash
> # We roll back to DN software version -55 using the -rollback flag – since we are running the old code (prior to this patch), we will restore the previous directory but will not delete the trash
> # We append to some of the blocks that were deleted in step 2
> # We then restart a DN that contains blocks that were appended to – since the trash still exists, it will be restored at this point, the appended-to blocks will be overwritten, and we will lose the appended data
> So I think we need to avoid writing anything to the trash directory if we have a previous directory.
> Thanks to [~james.thomas] for reporting this.
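
A hedged sketch of the guard proposed in the description above (skip trash entirely when a 'previous' directory exists); the class and method names here are hypothetical, not the committed patch:

{code:java}
// Hypothetical illustration of the proposed guard, not the committed patch:
// before moving a deleted block to trash, check whether a 'previous'
// directory exists and skip trash if it does.
import java.io.File;

final class TrashGuardSketch {
  private TrashGuardSketch() {}

  /** Safe to move deleted blocks to trash only during a rolling upgrade
   *  and only when no 'previous' directory (layout upgrade) exists. */
  static boolean shouldUseTrash(File storageRoot, boolean rollingUpgradeInProgress) {
    File previous = new File(storageRoot, "previous");
    return rollingUpgradeInProgress && !previous.exists();
  }
}
{code}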



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
