Date: Thu, 8 Sep 2016 00:01:45 +0000 (UTC)
From: "Zhe Zhang (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

    [ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15472194#comment-15472194 ]

Zhe Zhang commented on HDFS-10301:
----------------------------------

Some more background about {{TestAddOverReplicatedStripedBlocks}}. We developed the EC feature starting from the NameNode. To test the NameNode EC logic before the client was ready, we added several test methods, such as {{createStripedFile}} and {{addBlockToFile}}, that emulate blocks. In this case, those "fake" block reports confused the NN. In this particular test, the following sequence happens:
# Client creates a file on the NameNode.
# Client adds blocks to the file on the NameNode without actually creating the blocks on the DNs.
# DNs send "fake" incremental block reports to the NN, with randomly generated storage IDs:
{code}
DatanodeStorage storage = new DatanodeStorage(UUID.randomUUID().toString());
StorageReceivedDeletedBlocks[] reports = DFSTestUtil
    .makeReportForReceivedBlock(block,
        ReceivedDeletedBlockInfo.BlockStatus.RECEIVED_BLOCK, storage);
for (StorageReceivedDeletedBlocks report : reports) {
  ns.processIncrementalBlockReport(dn.getDatanodeId(), report);
}
{code}
# The above code (unintentionally) triggers the zombie-storage logic, because those randomly generated storages will not appear in the next real BR (a rough sketch of this pruning follows the list).
# We then inject real blocks onto the DNs, but out of the 9 blocks in the group we only inject 8. So when the NN receives the block reports triggered by {{cluster.triggerBlockReports();}} at L257, it should delete internal block #8, which was reported in the "fake" BR but not in the real BR.
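To make the pruning step concrete, here is a minimal sketch of the idea. All names ({{StorageInfo}}, {{pruneZombieStorages}}) are hypothetical and this is not the actual {{BlockManager}} implementation (the real method is the {{removeZombieReplicas}} path referenced in the log below); it only shows why a storage that is absent from the latest full BR gets its replicas dropped:
{code}
// Hedged sketch only -- not the actual HDFS BlockManager code.
// Names (StorageInfo, pruneZombieStorages, ...) are hypothetical.
import java.util.*;

class ZombiePruneSketch {
  static class StorageInfo {
    final String storageId;
    long lastBlockReportId;                     // id of the last full BR that mentioned this storage
    final Set<Long> blockIds = new HashSet<>(); // replicas tracked for this storage
    StorageInfo(String id) { this.storageId = id; }
  }

  /**
   * After a full block report identified by curBlockReportId has been
   * processed, any storage whose lastBlockReportId was not refreshed is
   * assumed to no longer exist on the DataNode ("zombie") and its
   * replicas are removed.
   */
  static void pruneZombieStorages(Collection<StorageInfo> storages,
                                  long curBlockReportId) {
    Iterator<StorageInfo> it = storages.iterator();
    while (it.hasNext()) {
      StorageInfo s = it.next();
      if (s.lastBlockReportId != curBlockReportId) {
        System.out.println("removed " + s.blockIds.size()
            + " replicas from storage " + s.storageId
            + ", which no longer exists on the DataNode.");
        it.remove();
      }
    }
  }
}
{code}
The key point is that any storage whose last-seen report id does not match the report just processed is treated as gone, which is exactly what happens to the randomly generated storages created by the test.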
The corresponding log entry for that deletion is:
{code}
[Block report processor] WARN blockmanagement.BlockManager (BlockManager.java:removeZombieReplicas(2282)) - processReport 0xf79050ce694c3bfa: removed 1 replicas from storage 6c834645-8aec-48f2-ace8-122344e07e96, which no longer exists on the DataNode.
{code}
{{6c834645-8aec-48f2-ace8-122344e07e96}} is one of the randomly generated storages. I haven't fully understood how the above caused the test to fail. Hope it helps.

> BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10301
>                 URL: https://issues.apache.org/jira/browse/HDFS-10301
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.1
>            Reporter: Konstantin Shvachko
>            Assignee: Vinitha Reddy Gankidi
>            Priority: Critical
>             Fix For: 2.7.4
>
>         Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out while sending a block report and then send the report again. While processing these two reports at the same time, the NameNode can interleave the processing of storages from the two reports. This corrupts the blockReportId tracking, which makes the NameNode think that some storages are zombies. Replicas from zombie storages are immediately removed, causing missing blocks.
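As a rough illustration of the race described in the summary above (all names are hypothetical; this is not the actual NameNode code), interleaving two reports that cover the same storages can leave some storages stamped with the older report id, so the subsequent zombie check falsely removes them:
{code}
// Hedged sketch only -- illustrates the interleaving, not the real implementation.
import java.util.*;

class InterleavedBlockReportSketch {
  // storage id -> id of the report that last touched it
  static final Map<String, Long> lastReportIdPerStorage = new HashMap<>();

  // Processing one storage from a full block report tagged with reportId.
  static void processStorage(String storageId, long reportId) {
    lastReportIdPerStorage.put(storageId, reportId);
  }

  // Zombie pruning keyed on the id of the report that just finished.
  static void pruneZombies(long curReportId) {
    lastReportIdPerStorage.entrySet().removeIf(e -> {
      boolean zombie = e.getValue() != curReportId;
      if (zombie) {
        System.out.println("falsely removing storage " + e.getKey());
      }
      return zombie;
    });
  }

  public static void main(String[] args) {
    long r1 = 1L;  // original block report (DN timed out waiting for the ack)
    long r2 = 2L;  // retransmitted block report for the same storages

    // The DataNode's three storages are processed under the original report,
    // then the retransmission starts being processed concurrently and
    // re-stamps only s2 before pruning runs.
    processStorage("s1", r1);
    processStorage("s2", r1);
    processStorage("s3", r1);
    processStorage("s2", r2);

    // Pruning keyed on r2's id falsely declares s1 and s3 zombies,
    // even though both storages still exist on the DataNode.
    pruneZombies(r2);
  }
}
{code}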