hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-12070) Failed block recovery leaves files open indefinitely and at risk for data loss
Date Fri, 15 Dec 2017 18:00:08 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292929#comment-16292929 ]

Kihwal Lee edited comment on HDFS-12070 at 12/15/17 5:59 PM:
-------------------------------------------------------------

bq. the PD needs to ... tell the namenode to exclude the failed node from the expected locations.

It appears calling {{commitBlockSynchronization()}} with {{closeFile == false}} might do the
trick. On the NN side, we could make it do block/lease recovery again soon. Older NNs
will still work, but with a one-hour delay until the retry.
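A minimal sketch of the scheduling idea above (hypothetical names; this is not actual NameNode code, and the 30-second prompt-retry value is an assumption): when {{commitBlockSynchronization()}} arrives with {{closeFile == false}}, a fixed NN would reschedule block/lease recovery promptly, while an older NN falls back to the one-hour lease hard limit before retrying.

```java
// Hypothetical sketch of the retry-delay decision; names and the 30s
// prompt-retry value are assumptions, not real NameNode internals.
public class RecoveryRescheduleSketch {
    static final long HARD_LIMIT_MS = 60L * 60 * 1000;  // legacy one-hour lease hard limit
    static final long PROMPT_RETRY_MS = 30L * 1000;     // assumed prompt retry interval

    /**
     * Delay before the next recovery attempt after a
     * commitBlockSynchronization call.
     */
    static long nextRecoveryDelay(boolean closeFile, boolean nnSupportsPromptRetry) {
        if (closeFile) {
            return 0L;  // file was closed; no further recovery needed
        }
        // closeFile == false: recovery did not finish; retry.
        return nnSupportsPromptRetry ? PROMPT_RETRY_MS : HARD_LIMIT_MS;
    }

    public static void main(String[] args) {
        System.out.println(nextRecoveryDelay(false, true));   // prompt retry
        System.out.println(nextRecoveryDelay(false, false));  // one-hour fallback
    }
}
```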



> Failed block recovery leaves files open indefinitely and at risk for data loss
> ------------------------------------------------------------------------------
>
>                 Key: HDFS-12070
>                 URL: https://issues.apache.org/jira/browse/HDFS-12070
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Daryn Sharp
>
> Files will remain open indefinitely if block recovery fails, which creates a high risk
of data loss. The replication monitor will not replicate these blocks.
> The NN provides the primary node a list of candidate nodes for recovery, which involves
a 2-stage process. In the first stage, the primary node removes any candidates that cannot
init replica recovery (essentially: alive and knows about the block) to create a sync list.
Stage 2 issues updates to the sync list – _but, unlike the first stage, it fails entirely if
any node fails_. The NN should be informed of the nodes that did succeed.
> Manual recovery will also fail until the problematic node is temporarily stopped, so that
a connection refused induces the bad node to be pruned from the candidates. Recovery then
succeeds, the lease is released, under-replication is fixed, and the block is invalidated
on the bad node.
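The two-stage process described above can be illustrated with a simplified sketch (hypothetical names; not actual DataNode code). Stage 1 prunes candidates that cannot init replica recovery; stage 2, as the report proposes, would tolerate per-node failures and return only the nodes that actually succeeded, so the NN can be told which expected locations remain valid.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Simplified illustration of the two-stage block recovery flow; the method
// names and Predicate-based node checks are assumptions for the sketch.
public class TwoStageRecoverySketch {
    static List<String> recover(List<String> candidates,
                                Predicate<String> canInitRecovery,
                                Predicate<String> updateSucceeds) {
        // Stage 1: build the sync list from candidates that are alive
        // and know about the block.
        List<String> syncList = new ArrayList<>();
        for (String dn : candidates) {
            if (canInitRecovery.test(dn)) {
                syncList.add(dn);
            }
        }
        // Stage 2: issue updates; keep going past individual failures
        // instead of aborting the whole recovery.
        List<String> succeeded = new ArrayList<>();
        for (String dn : syncList) {
            if (updateSucceeds.test(dn)) {
                succeeded.add(dn);
            }
        }
        // These are the nodes the NN should be told about.
        return succeeded;
    }
}
```

With candidates {dn1, dn2, dn3}, where dn2 cannot init recovery and dn3 fails the stage-2 update, only dn1 is reported back rather than the whole recovery failing.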



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

