hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-8881) Erasure Coding: internal blocks got missed and got over-replicated at the same time
Date Fri, 01 Sep 2017 00:41:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-8881:
------------------------------
    Issue Type: Bug  (was: Sub-task)
        Parent:     (was: HDFS-8031)

> Erasure Coding: internal blocks got missed and got over-replicated at the same time
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-8881
>                 URL: https://issues.apache.org/jira/browse/HDFS-8881
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>            Reporter: Walter Su
>            Assignee: Walter Su
>         Attachments: HDFS-8881.00.patch
>
>
> We know the replication checking depends on {{BlockManager#countNodes()}}, but countNodes() has limitations for a striped block group.
> *One* missing internal block will be caught by the replication checking and handled by ReplicationMonitor.
> *One* over-replicated internal block will be caught by the replication checking and handled by processOverReplicatedBlocks.
> *One* missing internal block and *two* over-replicated internal blocks *at the same time* will be caught by the replication checking, handled first by processOverReplicatedBlocks and later by ReplicationMonitor.
> *One* missing internal block and *one* over-replicated internal block *at the same time* will *NOT* be caught by the replication checking.
> "At the same time" means the one missing internal block can't be recovered, while one internal block got over-replicated anyway. For example:
> Scenario A:
> 1. Blocks #0 and #1 are reported missing.
> 2. A new #1 gets recovered.
> 3. The old #1 comes back, and the recovery work for #0 fails.
> Scenario B:
> 1. A DN that holds #1 is decommissioned or dies.
> 2. Block #0 is reported missing.
> 3. The DN holding #1 is recommissioned, and the recovery work for #0 fails.
> In the end, the block group holds internal blocks \[1, 1, 2, 3, 4, 5, 6, 7, 8\], assuming a 6+3 schema. The client always needs to decode #0 if the block group doesn't get handled.
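
A minimal sketch of the counting gap described above, assuming an RS(6,3) schema (9 internal blocks, indices 0-8). The class and method names here are hypothetical illustrations, not HDFS's actual BlockManager code:

{code:java}
import java.util.BitSet;

// Sketch: why a plain replica count misses "one missing + one
// over-replicated" internal block, while counting distinct
// internal-block indices catches it.
public class StripedCountSketch {
    static final int TOTAL_BLOCKS = 9; // 6 data + 3 parity

    // Naive check: total reported replicas == expected total.
    static boolean healthyByPlainCount(int[] reportedIndices) {
        return reportedIndices.length == TOTAL_BLOCKS;
    }

    // Robust check: every index 0..8 must appear exactly once.
    static boolean healthyByDistinctIndex(int[] reportedIndices) {
        BitSet seen = new BitSet(TOTAL_BLOCKS);
        for (int idx : reportedIndices) {
            if (seen.get(idx)) {
                return false; // duplicate index: over-replicated
            }
            seen.set(idx);
        }
        return seen.cardinality() == TOTAL_BLOCKS; // false if any index is missing
    }

    public static void main(String[] args) {
        // The end state from the description: #0 missing, #1 duplicated.
        int[] state = {1, 1, 2, 3, 4, 5, 6, 7, 8};
        System.out.println(healthyByPlainCount(state));    // true  -- looks healthy
        System.out.println(healthyByDistinctIndex(state)); // false -- problem detected
    }
}
{code}

With the \[1, 1, 2, 3, 4, 5, 6, 7, 8\] state, the plain count sees 9 of 9 replicas and reports the group healthy, while the distinct-index check flags both the duplicate #1 and the missing #0.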



