hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5438) Flaws in block report processing can cause data loss
Date Mon, 28 Oct 2013 21:12:30 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13807225#comment-13807225 ]

Kihwal Lee commented on HDFS-5438:
----------------------------------

This is a high-level description of what the patch does.
* The patch makes the NN save the list of already-reported replicas when starting a pipeline recovery. If a new report with the new gen stamp is not received for an existing replica by the time the recovery is done, that replica is marked corrupt (see the NN-side sketch after this list).
* If a block report is received for an existing corrupt replica and the replica is no longer corrupt, the NN removes it from the corrupt replicas map.
* If the client cannot close a file because the block does not have enough valid replicas, it eventually gives up rather than hanging forever. The client already fails after a fixed number of retries when adding a new block; completeFile() now uses the same retry limit, but the timeout doubles on every retry to make it try harder (see the backoff sketch below). With the default of 5 retries, a client waits at least 4 minutes before giving up. If the NN is not responding, it may wait longer.
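
The first two points amount to bookkeeping on the NN side. Below is a minimal sketch of that bookkeeping, assuming a hypothetical PipelineRecoveryTracker; none of these class or method names come from the actual patch, which modifies the real BlockManager/BlockInfo code paths.

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch, not the patch's actual BlockManager code:
// remember which replicas were reported before a pipeline recovery
// began, and surface the ones that never re-report with the new gen
// stamp so they can be marked corrupt when the recovery completes.
class PipelineRecoveryTracker {
  // blockId -> datanodes that had reported the replica before recovery
  private final Map<Long, Set<String>> preRecoveryReplicas = new HashMap<>();

  // Called when the NN starts a pipeline recovery for a block.
  void recoveryStarted(long blockId, Set<String> reportedNodes) {
    preRecoveryReplicas.put(blockId, new HashSet<>(reportedNodes));
  }

  // Called for each report that carries the recovery's new gen stamp.
  void reportedWithNewGenStamp(long blockId, String node) {
    Set<String> pending = preRecoveryReplicas.get(blockId);
    if (pending != null) {
      pending.remove(node); // this replica caught up with the recovery
    }
  }

  // Called when the recovery finishes: any node that never re-reported
  // with the new gen stamp still holds a stale replica -> mark corrupt.
  Set<String> staleReplicasAfterRecovery(long blockId) {
    Set<String> stale = preRecoveryReplicas.remove(blockId);
    return stale == null ? new HashSet<String>() : stale;
  }
}
{code}

The second point is the inverse operation: when a later report shows a previously-corrupt replica with the correct gen stamp, the NN drops it from the corrupt replicas map instead of leaving it marked corrupt forever.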

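For the third point, the retry loop in completeFile() boils down to bounded retries with a doubling sleep. A minimal sketch follows; the method signature, constants, and the Namenode interface here are illustrative stand-ins, not the real DFSClient/DFSOutputStream code, and the roughly four-minute floor quoted above comes from the patch's actual initial timeout and retry default rather than anything shown here.

{code:java}
import java.io.IOException;

// Illustrative sketch of bounded retries with a doubling backoff,
// in the spirit of the completeFile() change described above.
class CompleteFileRetry {
  interface Namenode {
    // Returns true once the last block has enough valid replicas.
    boolean complete(String src) throws IOException;
  }

  static void completeFile(Namenode nn, String src, int maxRetries,
      long initialSleepMs) throws IOException, InterruptedException {
    long sleepMs = initialSleepMs;
    for (int retries = maxRetries; ; retries--) {
      if (nn.complete(src)) {
        return; // file closed successfully
      }
      if (retries == 0) {
        // Give up instead of hanging forever.
        throw new IOException("Unable to close file " + src
            + ": last block does not have enough valid replicas");
      }
      Thread.sleep(sleepMs);
      sleepMs *= 2; // back off harder on every retry
    }
  }
}
{code}
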
> Flaws in block report processing can cause data loss
> ----------------------------------------------------
>
>                 Key: HDFS-5438
>                 URL: https://issues.apache.org/jira/browse/HDFS-5438
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.9, 2.2.0
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>         Attachments: HDFS-5438.trunk.patch
>
>
> The incremental block reports from data nodes and block commits are asynchronous. This becomes troublesome when the gen stamp for a block is changed during a write pipeline recovery.
> * If an incremental block report from a node is delayed but the NN already had enough replicas, a report with the old gen stamp may be received after block completion. This replica is correctly marked corrupt. But if the node had participated in the pipeline recovery, a new (delayed) report with the correct gen stamp arrives soon afterward; this report has no effect on the corrupt state of the replica.
> * If block reports are received while the block is still under construction (i.e. the client's call to commit the block has not yet been received by the NN), they are blindly accepted regardless of the gen stamp (see the sketch after this quoted description). If a failed node reports in with the old gen stamp while pipeline recovery is ongoing, the report is accepted and the replica is counted as valid during commit of the block.
> Due to the above two problems, correct replicas can be marked corrupt and corrupt replicas can be accepted during commit. So far we have observed two cases in production.
> * The client hangs forever trying to close a file. All replicas are marked corrupt.
> * After a successful close of a file, reads fail: corrupt replicas were accepted during commit and the valid replicas were marked corrupt afterward.
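
To make the second flaw in the quoted description concrete: for an under-construction block, report processing needs a gen stamp comparison along these lines (a hypothetical sketch, not actual BlockManager code); before the fix, the reported replica was effectively accepted without it.

{code:java}
// Hypothetical sketch of the missing check for under-construction
// blocks: only accept a reported replica if its gen stamp matches the
// block's current (possibly recovery-bumped) generation stamp.
class ReportedReplicaCheck {
  static boolean acceptUnderConstructionReplica(long blockGenStamp,
      long reportedGenStamp) {
    // Before the fix, reports were blindly accepted here, so a failed
    // node reporting with the old gen stamp was counted as valid.
    return reportedGenStamp == blockGenStamp;
  }
}
{code}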



--
This message was sent by Atlassian JIRA
(v6.1#6144)
