hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
Date Fri, 17 Jun 2016 00:44:05 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15335100#comment-15335100 ]

Colin Patrick McCabe commented on HDFS-10301:
---------------------------------------------

{code}
+        if (context.getTotalRpcs() == context.getCurRpc() + 1) {
+          long leaseId = this.getBlockReportLeaseManager().removeLease(node);
+          BlockManagerFaultInjector.getInstance().
+              removeBlockReportLease(node, leaseId);
         }
+        LOG.debug("Processing RPC with index " + context.getCurRpc()
+            + " out of total " + context.getTotalRpcs() + " RPCs in "
+            + "processReport 0x" +
+            Long.toHexString(context.getReportId()));
       }
{code}
This won't work in the presence of reordered RPCs.  If the RPCs are reordered so that curRpc 1 arrives before curRpc 0, the lease will already have been removed by the time RPC 0 arrives, so RPC 0 will be rejected.
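One order-independent alternative is to count how many of the expected RPCs have actually been processed, rather than checking whether the current RPC carries the highest index. A minimal sketch of that idea (class and method names are invented for illustration, not taken from the patch):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: remove the lease only after all RPCs of a block
// report have been processed, regardless of the order they arrive in.
public class ReorderSafeLeaseTracker {
  // reportId -> number of RPCs processed so far for that block report
  private final Map<Long, Integer> rpcsSeen = new HashMap<>();

  /** Returns true exactly once: when the last of totalRpcs RPCs arrives. */
  public synchronized boolean rpcProcessed(long reportId, int totalRpcs) {
    int seen = rpcsSeen.merge(reportId, 1, Integer::sum);
    if (seen == totalRpcs) {
      rpcsSeen.remove(reportId);  // report complete; safe to remove the lease
      return true;
    }
    return false;
  }
}
```

With this, curRpc 1 arriving before curRpc 0 simply increments the counter; the lease is removed only when the second (final) RPC lands.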

{code}
    for (int r = 0; r < reports.length; r++) {
      final BlockListAsLongs blocks = reports[r].getBlocks();
      if (blocks != BlockListAsLongs.STORAGE_REPORT_ONLY) {
{code}
Using object identity ({{!=}}) to compare two {{BlockListAsLongs}} objects is very surprising to anyone reading the code.  In general, I find the idea of overloading the block list to sometimes not be a block list weird and surprising.  If we are going to do it, it certainly needs a lot of comments in the code to explain what's going on.  I think it would be clearer and less error-prone to just add an optional list of storage ID strings in the {{.proto}} file.
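The fragility of the sentinel pattern can be shown with a small self-contained sketch (names invented, not the actual {{BlockListAsLongs}} code): identity comparison against a sentinel silently breaks as soon as the value round-trips through serialization, because deserialization produces a fresh instance.

```java
import java.util.Arrays;

// Hypothetical sketch: why comparing against a sentinel by reference
// identity is fragile. Any copy of the sentinel (e.g. one produced by
// deserializing an RPC payload) fails the identity check.
public class SentinelDemo {
  static final long[] STORAGE_REPORT_ONLY = new long[0];

  static boolean isStorageReportOnly(long[] blocks) {
    return blocks == STORAGE_REPORT_ONLY;  // identity, not equality
  }

  public static void main(String[] args) {
    long[] original = STORAGE_REPORT_ONLY;
    // Simulates a serialize/deserialize round trip: same contents, new object.
    long[] roundTripped = Arrays.copyOf(original, original.length);
    System.out.println(isStorageReportOnly(original));      // true
    System.out.println(isStorageReportOnly(roundTripped));  // false: marker lost
  }
}
```

An explicit field in the protobuf message avoids this entirely, since the receiver checks a declared field rather than reconstructing a singleton.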

{code}
    if (nn.getFSImage().isUpgradeFinalized()) {
      Set<String> storageIDsInBlockReport = new HashSet<>();
      if (context.getTotalRpcs() == context.getCurRpc() + 1) {
        for (StorageBlockReport report : reports) {
          storageIDsInBlockReport.add(report.getStorage().getStorageID());
        }
        bm.removeZombieStorages(nodeReg, context, storageIDsInBlockReport);
      }
    }
{code}
This isn't going to work in the presence of reordered RPCs, is it?  If curRpc 1 arrives before curRpc 0, we'll never enter this clause at all, so zombies won't be removed.  Considering you were so concerned that my patch didn't handle the interleaved and/or reordered RPC case, this seems like something you should solve.  I also don't understand the rationale for ignoring zombies during an upgrade.  Keep in mind that zombie storages can lead to data loss under some conditions (see HDFS-7960 for details).
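The reordering problem here could likewise be addressed by accumulating the storage IDs from each RPC and triggering zombie removal only once every RPC of the report has arrived, whichever order they come in. A sketch under those assumptions (all names invented for illustration):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: collect storage IDs across the RPCs of one block
// report; hand back the complete set only when the last RPC arrives, so
// zombie removal is independent of RPC arrival order.
public class ZombieStorageAccumulator {
  private final Map<Long, Set<String>> idsByReport = new HashMap<>();
  private final Map<Long, Integer> rpcsByReport = new HashMap<>();

  /**
   * Record one RPC's storage IDs. Returns the full set of reported storage
   * IDs when the final RPC arrives, or null while RPCs are outstanding.
   */
  public synchronized Set<String> recordRpc(long reportId, int totalRpcs,
                                            Set<String> storageIds) {
    idsByReport.computeIfAbsent(reportId, k -> new HashSet<>())
               .addAll(storageIds);
    int seen = rpcsByReport.merge(reportId, 1, Integer::sum);
    if (seen < totalRpcs) {
      return null;
    }
    rpcsByReport.remove(reportId);
    // Caller would treat storages absent from this set as zombie candidates.
    return idsByReport.remove(reportId);
  }
}
```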

> BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10301
>                 URL: https://issues.apache.org/jira/browse/HDFS-10301
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.1
>            Reporter: Konstantin Shvachko
>            Assignee: Vinitha Reddy Gankidi
>            Priority: Critical
>         Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, HDFS-10301.01.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out while sending a block report, and then sends the block report again. The NameNode, processing these two reports concurrently, can interleave the processing of storages from different reports. This corrupts the blockReportId field, which makes the NameNode think that some storages are zombies. Replicas on zombie storages are immediately removed, causing missing blocks.




