hadoop-hdfs-issues mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1103) Replica recovery doesn't distinguish between flushed-but-corrupted last chunk and unflushed last chunk
Date Fri, 14 May 2010 18:08:44 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12867609#action_12867609 ]

Hairong Kuang commented on HDFS-1103:
-------------------------------------

We should also exclude those RBWs that failed due to disk errors from lease recovery if there
are good replicas available.
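
To make the idea concrete, a minimal sketch of that selection step is below. The class names and the hitDiskError flag are simplified stand-ins for illustration, not the actual DataNode replica classes.

{code}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a replica participating in lease recovery.
class ReplicaCandidate {
    final String datanodeId;
    final long numBytes;
    final boolean hitDiskError;   // assumed flag: replica saw an I/O error while being written

    ReplicaCandidate(String datanodeId, long numBytes, boolean hitDiskError) {
        this.datanodeId = datanodeId;
        this.numBytes = numBytes;
        this.hitDiskError = hitDiskError;
    }
}

class LeaseRecoverySelection {
    // Prefer replicas that did not hit disk errors; fall back to the full
    // candidate list only when every replica is suspect.
    static List<ReplicaCandidate> candidatesForRecovery(List<ReplicaCandidate> replicas) {
        List<ReplicaCandidate> good = new ArrayList<>();
        for (ReplicaCandidate r : replicas) {
            if (!r.hitDiskError) {
                good.add(r);
            }
        }
        return good.isEmpty() ? replicas : good;
    }
}
{code}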

> Replica recovery doesn't distinguish between flushed-but-corrupted last chunk and unflushed last chunk
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1103
>                 URL: https://issues.apache.org/jira/browse/HDFS-1103
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-1103-test.txt
>
>
> When the DN creates a replica under recovery, it calls validateIntegrity, which truncates
> the last checksum chunk off of a replica if it is found to be invalid. Then when the block
> recovery process happens, this shortened block wins over a longer replica from another node
> where there was no corruption. Thus, if just one of the DNs has an invalid last checksum chunk,
> data that has been sync()ed to other datanodes can be lost.
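
To spell out the failure mode quoted above, here is a toy illustration (not HDFS code), assuming recovery finalizes the block at the minimum length reported by the participating replicas, as the report implies; the lengths are made up for the example.

{code}
import java.util.Arrays;
import java.util.List;

public class TruncationExample {
    public static void main(String[] args) {
        // Two healthy replicas hold all 1024 sync()ed bytes; the third node
        // found its last 512-byte checksum chunk corrupt and truncated to 512.
        List<Long> replicaLengths = Arrays.asList(1024L, 1024L, 512L);

        // A recovery policy that keeps only the length all replicas agree on
        // (the minimum) drops the last 512 sync()ed bytes because of one bad disk.
        long recoveredLength =
            replicaLengths.stream().mapToLong(Long::longValue).min().orElse(0L);

        System.out.println("recovered length = " + recoveredLength); // prints 512
    }
}
{code}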

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

