hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1103) Replica recovery doesn't distinguish between flushed-but-corrupted last chunk and unflushed last chunk
Date Tue, 11 May 2010 01:43:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12866018#action_12866018 ]

Todd Lipcon commented on HDFS-1103:
-----------------------------------

This is definitely the same issue -- I remember now that I opened this issue earlier, when I
started forward-porting those same tests :)

I'm not 100% sure what the right recourse is -- should we in fact always recover the longest
valid replica for RWR (replica waiting to be recovered) / RUR (replica under recovery) cases,
even if it means a lower replication count?
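
For illustration, here is a minimal sketch of what that alternative could look
like -- the class and method names are hypothetical, not the actual datanode
recovery code:

    import java.util.ArrayList;
    import java.util.List;

    public class LongestReplicaSketch {
        // Hypothetical stand-in for a datanode's view of one replica.
        static class Replica {
            final String datanode;
            final long validLength;  // bytes that passed checksum validation
            Replica(String datanode, long validLength) {
                this.datanode = datanode;
                this.validLength = validLength;
            }
        }

        // Keep only the replicas that match the maximum valid length;
        // recovery would proceed from these, and normal re-replication
        // would restore the replication count afterwards.
        static List<Replica> pickLongestValid(List<Replica> candidates) {
            long max = 0;
            for (Replica r : candidates) {
                max = Math.max(max, r.validLength);
            }
            List<Replica> longest = new ArrayList<Replica>();
            for (Replica r : candidates) {
                if (r.validLength == max) {
                    longest.add(r);
                }
            }
            return longest;
        }

        public static void main(String[] args) {
            List<Replica> candidates = new ArrayList<Replica>();
            candidates.add(new Replica("dn1", 512));   // truncated replica
            candidates.add(new Replica("dn2", 1024));
            candidates.add(new Replica("dn3", 1024));
            // Recovery would use dn2 and dn3 only; replication drops to 2
            // until the block is re-replicated.
            System.out.println("survivors = " + pickLongestValid(candidates).size());
        }
    }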

> Replica recovery doesn't distinguish between flushed-but-corrupted last chunk and unflushed last chunk
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1103
>                 URL: https://issues.apache.org/jira/browse/HDFS-1103
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-1103-test.txt
>
>
> When the DN creates a replica under recovery, it calls validateIntegrity, which truncates
> the last checksum chunk off of a replica if it is found to be invalid. Then when the block
> recovery process happens, this shortened block wins over a longer replica from another node
> where there was no corruption. Thus, if just one of the DNs has an invalid last checksum
> chunk, data that has been sync()ed to other datanodes can be lost.
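
As a concrete illustration of the failure described in the quoted report, here
is a toy sketch -- hypothetical names, not the real recovery code -- assuming
recovery agrees on the minimum length across replicas, as the report describes:

    import java.util.Arrays;
    import java.util.List;

    public class MinLengthLossSketch {
        // If recovery agrees on the minimum length across replicas, a
        // single truncated replica shortens the whole recovered block.
        static long chooseRecoveryLength(List<Long> replicaLengths) {
            long min = Long.MAX_VALUE;
            for (long len : replicaLengths) {
                min = Math.min(min, len);
            }
            return min;
        }

        public static void main(String[] args) {
            // Three DNs held 1024 sync()ed bytes; on one DN the last
            // 512-byte checksum chunk was corrupt, so validateIntegrity
            // truncated that replica to 512 bytes.
            List<Long> lengths = Arrays.asList(512L, 1024L, 1024L);
            // Prints 512: bytes sync()ed to the two healthy DNs are lost.
            System.out.println("recovered length = " + chooseRecoveryLength(lengths));
        }
    }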

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

