hadoop-hdfs-issues mailing list archives

From "sam rash (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1218) 20 append: Blocks recovered on startup should be treated with lower priority during block synchronization
Date Tue, 22 Jun 2010 17:00:01 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12881249#action_12881249 ]

sam rash commented on HDFS-1218:
--------------------------------

I racked my brain and can't come up with a case in which this could actually occur: keepLength
is only set to true when doing an append. If any nodes had gone down and come back up (RWR),
they either have an old genstamp and will be ignored, or soft lease expiry recovery is initiated
by the NN with keepLength = false first.

I think the idea + patch look good to me
(and thanks for taking the time to explain it)
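
To make the reasoning above concrete, here is a minimal sketch, assuming invented names (ReplicaSketch, SyncSketch, chooseSyncLength) rather than the actual 0.20-append code, of how a stale-genstamp replica drops out before a sync length is chosen, and of how keepLength changes which length wins:

    // Hypothetical sketch only: ReplicaSketch and SyncSketch are invented for
    // illustration and are not the 0.20-append implementation.
    import java.util.ArrayList;
    import java.util.List;

    class ReplicaSketch {
      final long genStamp;              // generation stamp reported by the datanode
      final long numBytes;              // replica length on that datanode
      final boolean recoveredOnStartup; // true for an RWR replica (see further below)

      ReplicaSketch(long genStamp, long numBytes, boolean recoveredOnStartup) {
        this.genStamp = genStamp;
        this.numBytes = numBytes;
        this.recoveredOnStartup = recoveredOnStartup;
      }
    }

    class SyncSketch {
      // Replicas carrying a stale generation stamp are dropped up front, so an
      // RWR replica with an old genstamp can never shorten the block.
      static List<ReplicaSketch> freshReplicas(List<ReplicaSketch> reported,
                                               long currentGenStamp) {
        List<ReplicaSketch> fresh = new ArrayList<ReplicaSketch>();
        for (ReplicaSketch r : reported) {
          if (r.genStamp >= currentGenStamp) {
            fresh.add(r);
          }
        }
        return fresh;
      }

      // keepLength == true only on the client append path: the block stays at the
      // client's length. NN-initiated (soft lease expiry) recovery passes
      // keepLength == false and truncates to the minimum surviving replica length.
      static long chooseSyncLength(List<ReplicaSketch> fresh, boolean keepLength,
                                   long clientLength) {
        if (keepLength) {
          return clientLength;
        }
        long min = Long.MAX_VALUE;
        for (ReplicaSketch r : fresh) {
          min = Math.min(min, r.numBytes);
        }
        return min;
      }
    }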


> 20 append: Blocks recovered on startup should be treated with lower priority during block synchronization
> ---------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1218
>                 URL: https://issues.apache.org/jira/browse/HDFS-1218
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20-append
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Critical
>             Fix For: 0.20-append
>
>         Attachments: hdfs-1281.txt
>
>
> When a datanode experiences power loss, it can come back up with truncated replicas (due to local FS journal replay). Those replicas should not be allowed to truncate the block during block synchronization if there are other replicas from DNs that have _not_ restarted.
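
The priority rule described above could look roughly like the following, reusing the hypothetical ReplicaSketch type from the earlier sketch (again an illustration of the intent, not the attached patch):

    // Hypothetical continuation of the sketch above: replicas recovered on
    // startup are only consulted when no replica from a non-restarted datanode
    // survived the genstamp filter, so a truncated RWR replica can never
    // shorten the block on its own.
    static List<ReplicaSketch> preferNonRestarted(List<ReplicaSketch> fresh) {
      List<ReplicaSketch> best = new ArrayList<ReplicaSketch>();
      for (ReplicaSketch r : fresh) {
        if (!r.recoveredOnStartup) {
          best.add(r);
        }
      }
      // Fall back to the startup-recovered replicas only if nothing better exists.
      return best.isEmpty() ? fresh : best;
    }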

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

