hadoop-hdfs-issues mailing list archives

From "Vinayakumar B (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12120) Use new block for pre-RollingUpgrade files' append requests
Date Fri, 21 Jul 2017 05:34:00 GMT

    https://issues.apache.org/jira/browse/HDFS-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16095805#comment-16095805

Vinayakumar B commented on HDFS-12120:

bq. The variable length block feature is new and has not been fully field tested. Making an
existing popular feature depend on it has a risk. We need to think about how the risk can
be mitigated. E.g. provide a way to opt out in case it creates issues.
Yes, I understand. After going through the HDFS-3689 discussions, I agree this feature should
be adopted explicitly by writers to avoid any unexpected surprises (performance/functionality).
So making the decision to go to a new block after a rolling upgrade could be problematic
if the client expects the next data to be in the same block.
Are there any other possible approaches you can think of?
All the remaining approaches I could think of have a problem with the time gap between the
RollingUpgrade starting on the Namenode and the Datanode learning of it in the next heartbeat.
Appends handled within this window cannot be covered if the Datanode alone is doing the backup
of the previous data.
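The time-gap concern above can be illustrated with a small sketch. All names and times here are invented for illustration, not actual HDFS code: the point is only that an append handled by a Datanode after the Namenode recorded the RU start, but before the Datanode's next heartbeat, falls into an unprotected window under any Datanode-only backup scheme.

```java
// Hypothetical sketch of the race window: the Namenode records that a
// rolling upgrade started at ruStartTime, but a Datanode only learns
// this from its next heartbeat response. An append handled by the
// Datanode inside that window gets no trash copy / hardlink from a
// Datanode-only backup scheme.
public class HeartbeatGap {

    /**
     * An append is unprotected if it happens after the RU started on the
     * Namenode but before the Datanode has heard about it.
     */
    public static boolean appendUnprotected(long appendTime,
                                            long ruStartTime,
                                            long dnHeardOfRuTime) {
        return appendTime >= ruStartTime && appendTime < dnHeardOfRuTime;
    }

    public static void main(String[] args) {
        long ruStart = 1_000L;        // RU prepare done on the Namenode
        long nextHeartbeat = 4_000L;  // Datanode hears of RU here

        // Append inside the window -> not backed up
        System.out.println(appendUnprotected(2_000L, ruStart, nextHeartbeat)); // true
        // Append after the Datanode knows about the RU -> backed up
        System.out.println(appendUnprotected(5_000L, ruStart, nextHeartbeat)); // false
    }
}
```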

> Use new block for pre-RollingUpgrade files' append requests
> -----------------------------------------------------------
>                 Key: HDFS-12120
>                 URL: https://issues.apache.org/jira/browse/HDFS-12120
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Vinayakumar B
>            Assignee: Vinayakumar B
>         Attachments: HDFS-12120-01.patch
> After the RollingUpgrade prepare, an append on pre-RU files will re-open the same last block
and make changes to it (appending extra data, changing the genstamp, etc.).
> These changes to the block will not be tracked in Datanodes (either in trash or via hardlinks).
> This creates a problem if RollingUpgrade.Rollback is called.
> Since both the block state and size have changed, the block will be marked corrupt after rollback.
> To avoid this, the first append on pre-RU files can be forced to write to a new block.
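The mitigation proposed in the description can be sketched as a small decision helper. The class and method names below are hypothetical, not the actual NameNode code from the attached patch: the sketch only captures the rule that, while a rolling upgrade is in progress, an append to a file whose last block predates the RU start should allocate a fresh block, so the pre-RU block stays untouched and a rollback cannot see a size/genstamp mismatch.

```java
// Hypothetical sketch of the HDFS-12120 decision: during a rolling
// upgrade, only pre-RU blocks are at risk, because anything written
// after RU start is already covered by the Datanode trash/hardlink
// mechanism.
public class AppendPolicy {

    /**
     * @param rollingUpgradeInProgress whether RU "prepare" has been done
     * @param lastBlockWriteTime       (hypothetical) time the file's last block was last written
     * @param ruStartTime              RU start time recorded by the Namenode
     * @return true if the append should go to a new block
     */
    public static boolean shouldUseNewBlock(boolean rollingUpgradeInProgress,
                                            long lastBlockWriteTime,
                                            long ruStartTime) {
        return rollingUpgradeInProgress && lastBlockWriteTime < ruStartTime;
    }

    public static void main(String[] args) {
        // Pre-RU block appended during RU -> force a new block
        System.out.println(shouldUseNewBlock(true, 100L, 200L));  // true
        // Block written after RU start -> safe to reuse
        System.out.println(shouldUseNewBlock(true, 300L, 200L));  // false
        // No RU in progress -> normal append
        System.out.println(shouldUseNewBlock(false, 100L, 200L)); // false
    }
}
```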

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
