hadoop-hdfs-issues mailing list archives

From "Zhe Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
Date Thu, 08 Sep 2016 00:01:45 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472194#comment-15472194 ]

Zhe Zhang commented on HDFS-10301:

Some more background about {{TestAddOverReplicatedStripedBlocks}}. We developed the EC feature
starting from the NameNode. To test NameNode EC logic before the client was ready, we added
several test helpers that emulate blocks, such as {{createStripedFile}} and {{addBlockToFile}}.
In this case, those "fake" block reports confused the NN.

In this particular test, the below sequence happens:
# Client creates a file on the NameNode.
# Client adds blocks to the file on the NameNode without actually creating the blocks on the DN.
# DN sends "fake" block reports to the NN, with randomly generated storage IDs:
      DatanodeStorage storage = new DatanodeStorage(UUID.randomUUID().toString());
      StorageReceivedDeletedBlocks[] reports = DFSTestUtil
          .makeReportForReceivedBlock(blk,
              ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK, storage);
      for (StorageReceivedDeletedBlocks report : reports) {
        ns.processIncrementalBlockReport(dn.getDatanodeId(), report);
      }
# The above code (unintentionally) triggers the zombie storage logic, because those randomly
generated storages will not appear in the next real BR.
# We inject real blocks onto the DNs, but out of the 9 blocks in the group we only inject 8.
So when the NN processes the block reports from {{cluster.triggerBlockReports();}} at L257, it
should delete internal block #8, which was reported in the "fake" BR but not in the real BR.
The log for that is:
[Block report processor] WARN  blockmanagement.BlockManager (BlockManager.java:removeZombieReplicas(2282))
- processReport 0xf79050ce694c3bfa: removed 1 replicas from storage 6c834645-8aec-48f2-ace8-122344e07e96,
which no longer exists on the DataNode.
{{6c834645-8aec-48f2-ace8-122344e07e96}} is one of the randomly generated storages.
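For context, the zombie-storage pruning involved here can be sketched roughly as follows. This is a simplified model of the behavior, not the actual {{BlockManager}} code; the class, field, and method names are illustrative. Every full block report stamps the storages it mentions with the report's id, and any storage left with a stale stamp afterwards is treated as no longer present on the DataNode:

```java
import java.util.*;

// Simplified model: a storage known to the NN only through "fake"
// incremental reports is never stamped by the next real full block
// report, so the pruning pass declares it zombie and drops its replicas.
public class ZombiePruneSketch {
  // storageId -> id of the last full block report that mentioned it
  static Map<String, Long> lastReportId = new HashMap<>();

  static void processFullReport(long reportId, List<String> storages) {
    for (String s : storages) {
      lastReportId.put(s, reportId); // stamp storages present in this BR
    }
  }

  // Storages not stamped with the current report id are declared zombie.
  static List<String> removeZombies(long curReportId) {
    List<String> zombies = new ArrayList<>();
    Iterator<Map.Entry<String, Long>> it = lastReportId.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, Long> e = it.next();
      if (e.getValue() != curReportId) {
        zombies.add(e.getKey());
        it.remove();
      }
    }
    return zombies;
  }

  public static void main(String[] args) {
    // A "fake" storage the NN learned about only via incremental reports:
    lastReportId.put("fake-random-storage", 0L);
    // The next real full BR mentions only the DN's real storage:
    processFullReport(1L, Arrays.asList("real-storage"));
    // The fake storage is pruned as a zombie:
    System.out.println(removeZombies(1L)); // prints [fake-random-storage]
  }
}
```

In this model the randomly generated storages from step 3 behave exactly like {{fake-random-storage}}: they exist in the NN's map but are absent from every real BR, so the first real BR after them removes their replicas.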

I haven't fully understood how the above caused the test to fail. Hope it helps.

> BlockReport retransmissions may lead to storages falsely being declared zombie if storage
> report processing happens out of order
> --------------------------------------------------------------------------------------------------------------------------------
>                 Key: HDFS-10301
>                 URL: https://issues.apache.org/jira/browse/HDFS-10301
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.1
>            Reporter: Konstantin Shvachko
>            Assignee: Vinitha Reddy Gankidi
>            Priority: Critical
>             Fix For: 2.7.4
>         Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, HDFS-10301.004.patch,
> HDFS-10301.005.patch, HDFS-10301.006.patch, HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch,
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, HDFS-10301.012.patch, HDFS-10301.013.patch,
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
> When the NameNode is busy, a DataNode can time out sending a block report; it then sends the
> block report again. The NameNode, while processing these two reports at the same time, can
> interleave processing of storages from different reports. This corrupts the blockReportId
> field, which makes the NameNode think that some storages are zombies. Replicas from zombie
> storages are immediately removed, causing missing blocks.
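The interleaving described in the issue can be modeled with a small sketch. This is again a simplification with illustrative names (the real per-storage state lives in the NameNode's datanode/storage descriptors): each storage ends up stamped with the id of whichever report copy was processed for it last, so when a stale copy of a retransmitted report is processed after the fresh one, a live storage carries an old id and is falsely pruned.

```java
import java.util.*;

// Models the HDFS-10301 race: a DN retransmits its block report after a
// timeout, and the NN interleaves per-storage processing of both copies.
// A storage whose last-processed stamp comes from the stale copy ends up
// with an id != the latest report id, and is falsely declared zombie.
public class InterleavedReportSketch {
  static final class StorageReport {
    final long reportId;
    final String storageId;
    StorageReport(long reportId, String storageId) {
      this.reportId = reportId;
      this.storageId = storageId;
    }
  }

  // Processes storage reports in arrival order, then returns the storages
  // whose final stamp differs from the latest report id (the "zombies").
  static Set<String> process(List<StorageReport> interleaved, long latestId) {
    Map<String, Long> stamp = new HashMap<>();
    for (StorageReport r : interleaved) {
      stamp.put(r.storageId, r.reportId); // last writer wins
    }
    Set<String> zombies = new HashSet<>();
    for (Map.Entry<String, Long> e : stamp.entrySet()) {
      if (e.getValue() != latestId) zombies.add(e.getKey());
    }
    return zombies;
  }

  public static void main(String[] args) {
    // Report id=1 covers storages s1 and s2; the DN times out and resends
    // the same content as report id=2. The NN interleaves the two copies:
    List<StorageReport> interleaved = Arrays.asList(
        new StorageReport(1, "s1"),
        new StorageReport(2, "s1"),
        new StorageReport(2, "s2"),
        new StorageReport(1, "s2")); // stale copy of s2 processed last!
    // s2's final stamp is 1, not the latest id 2, so it is falsely pruned:
    System.out.println(process(interleaved, 2)); // prints [s2]
  }
}
```

With in-order processing (both storages of report 2 stamped last), the zombie set is empty, which is why the bug only appears under retransmission plus interleaving.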

This message was sent by Atlassian JIRA
