hadoop-hdfs-issues mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
Date Sat, 23 Apr 2016 01:08:13 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15254994#comment-15254994 ]

Konstantin Shvachko commented on HDFS-10301:
--------------------------------------------

Colin, I ran your unit test and verified that it fails on the current code base but succeeds
with your patch.
Looking at the patch: counting {{storagesSeen}} does work for your test case, but it is somewhat
confusing, as the count changes when reports interleave.
Suppose you have 3 storages (s1, s2, s3) and two block reports br1, br2 interleaving in the
following way:
|| reportId-storage || storagesSeen ||
| br1-s1 | 0 |
| br1-s2 | 1 |
| br2-s1 | 0 |
| br2-s2 | 1 |
| br1-s3 | 0 |
The last line is confusing: it should have been 2, but it is 0, since br2 overrode
{{lastBlockReportId}} for s1 and s2.
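To make the interleaving concrete, here is a toy, self-contained simulation of the per-storage
report-id tagging (the class name and the counting loop are illustrative only, not the actual
patch code); running it reproduces the table above.
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy simulation of the storagesSeen counting above; not HDFS code. */
public class InterleavedReportDemo {
  public static void main(String[] args) {
    // lastBlockReportId per storage, initially unset (0).
    Map<String, Long> lastReportId = new LinkedHashMap<>();
    for (String s : new String[] {"s1", "s2", "s3"}) {
      lastReportId.put(s, 0L);
    }

    // Interleaved storage-report RPCs: (reportId, storage) in arrival order.
    Object[][] rpcs = {
        {1L, "s1"}, {1L, "s2"}, {2L, "s1"}, {2L, "s2"}, {1L, "s3"}};

    for (Object[] rpc : rpcs) {
      long reportId = (Long) rpc[0];
      String storage = (String) rpc[1];
      // storagesSeen = storages already tagged with this report's id.
      int storagesSeen = 0;
      for (long id : lastReportId.values()) {
        if (id == reportId) {
          storagesSeen++;
        }
      }
      System.out.println("br" + reportId + "-" + storage
          + " storagesSeen=" + storagesSeen);
      // Tag the storage with the id of the report now processing it.
      lastReportId.put(storage, reportId);
    }
    // Output matches the table: br1-s3 sees 0, because br2 has already
    // overwritten lastBlockReportId on s1 and s2.
  }
}
{code}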
This brought me to an idea. BR ids are monotonically increasing. What if, in {{BlockManager.processReport()}}
(before processing, but under the lock), we check {{lastBlockReportId}} for all storages, and
if we see one greater than {{context.getReportId()}}, we throw an {{IOException}} indicating
that the next block report is already in progress and we do not need to continue with this one?
The exception is not expected to be passed back to the DataNode, as it has already timed out,
but even if it does get passed, the DataNode will just send another block report.
I think this could be a simple fix for this jira, and we can discuss other approaches to zombie
storage detection in a follow-up issue. The current approach seems to be error-prone. One way is
to go with a retry cache, as [~jingzhao] suggested, or there could be other ideas.
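A rough sketch of the proposed pre-check (the class and method names here are hypothetical; the
real check would live inside {{BlockManager.processReport()}} and read {{lastBlockReportId}}
from the DataNode's storages while holding the lock):
{code:java}
import java.io.IOException;

/** Toy sketch only; not the actual BlockManager code. */
class BlockReportPreCheck {
  /**
   * Proposed guard, run before processing a storage of the report.
   * Report ids increase monotonically, so a storage already tagged with
   * a larger id means a newer block report from this DataNode is in
   * flight and the current report is stale.
   */
  static void ensureReportNotSuperseded(long currentReportId,
      long[] lastBlockReportIds) throws IOException {
    for (long lastId : lastBlockReportIds) {
      if (lastId > currentReportId) {
        throw new IOException("Block report " + currentReportId
            + " superseded by newer report " + lastId
            + "; skipping the remaining storages of this report.");
      }
    }
  }
}
{code}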


> BlockReport retransmissions may lead to storages falsely being declared zombie if storage
report processing happens out of order
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10301
>                 URL: https://issues.apache.org/jira/browse/HDFS-10301
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.1
>            Reporter: Konstantin Shvachko
>            Assignee: Colin Patrick McCabe
>            Priority: Critical
>         Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, HDFS-10301.01.patch,
zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report, and then it sends
the block report again. The NameNode, while processing these two reports at the same time, can
interleave processing of storages from different reports. This screws up the blockReportId field,
which makes the NameNode think that some storages are zombie. Replicas from zombie storages are
immediately removed, causing missing blocks.



