hadoop-hdfs-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HDFS-1103) Replica recovery doesn't distinguish between flushed-but-corrupted last chunk and unflushed last chunk
Date Tue, 11 May 2010 00:31:33 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-1103:
------------------------------

    Priority: Blocker  (was: Major)

I think this may be a blocker if proper sync is a blocking feature. The testAppendSyncChecksum
tests in HDFS-1139 also show a very similar problem where synced data gets truncated; I'm not
entirely certain whether it's exactly the same issue.

> Replica recovery doesn't distinguish between flushed-but-corrupted last chunk and unflushed last chunk
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1103
>                 URL: https://issues.apache.org/jira/browse/HDFS-1103
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-1103-test.txt
>
>
> When the DN creates a replica under recovery, it calls validateIntegrity, which truncates
> the last checksum chunk off of a replica if it is found to be invalid. Then when the block
> recovery process happens, this shortened block wins over a longer replica from another node
> where there was no corruption. Thus, if just one of the DNs has an invalid last checksum chunk,
> data that has been sync()ed to other datanodes can be lost.
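
For illustration, here is a minimal, self-contained Java sketch of the failure mode described
above. This is not the actual DataNode code (the class name, method signatures, and numbers
are made up for the example), and it assumes block recovery keeps only the length common to
all replicas under recovery, which is how the shortened replica "wins":

    // RecoveryLengthSketch.java -- hypothetical toy model, not Hadoop code.
    import java.util.Arrays;
    import java.util.List;

    public class RecoveryLengthSketch {
        static final long CHUNK = 512;  // bytes covered by one checksum

        // Models validateIntegrity(): if the last (partial) chunk fails its
        // checksum check, the replica is truncated back to the last chunk
        // boundary that verifies cleanly.
        static long validateIntegrity(long length, boolean lastChunkCorrupt) {
            return lastChunkCorrupt ? (length / CHUNK) * CHUNK : length;
        }

        public static void main(String[] args) {
            long synced = 3 * CHUNK + 100;  // 1636 bytes sync()ed to all 3 DNs

            // DN1's last chunk is corrupt on disk; DN2 and DN3 are intact.
            List<Long> replicaLengths = Arrays.asList(
                validateIntegrity(synced, true),    // DN1 -> truncated to 1536
                validateIntegrity(synced, false),   // DN2 -> 1636
                validateIntegrity(synced, false));  // DN3 -> 1636

            // Recovery has no way to tell "flushed but corrupted" apart from
            // "never flushed", so it keeps only the common prefix:
            long recovered = replicaLengths.stream()
                .mapToLong(Long::longValue).min().getAsLong();

            System.out.println("synced=" + synced + ", recovered=" + recovered);
            // Prints recovered=1536: the last 100 sync()ed bytes are lost even
            // though two healthy replicas still hold them.
        }
    }

Because recovery can't distinguish a flushed-but-corrupted last chunk from one that was never
flushed, the intact replicas on DN2 and DN3 don't win, and the sync()ed bytes disappear.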

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

