From: "Konstantin Shvachko (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Thu, 15 Sep 2016 06:42:21 +0000 (UTC)
Subject: [jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

    [ https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492546#comment-15492546 ]

Konstantin Shvachko commented on HDFS-10301:
--------------------------------------------

It is still not clear what scenario concerns you. Arpit, could you please clarify?
* The Balancer copies a replica from a source DN to a target DN and, when finished, sends an IBR with the target as the new replica location and a hint to remove the old replica from the source DN. If the source or the target storage fails during the copy, the transfer fails and the Balancer moves on. If either storage fails after the transfer, it is the same as a regular failure: the block becomes under-replicated and is recovered in due time.
* For VolumeChoosingPolicy it is even more important to know early which storages have failed, in order to avoid choosing them as targets.

In fact, the code path for zombie storage removal via FBRs (introduced by HDFS-7960) is practically never triggered. Because heartbeats are much more frequent, the removal of zombies goes through heartbeats instead. So if that path were unsafe, as you assume, we should already have the evidence, since it is happening right now.

I agree this is complex, but we've learned a lot and now have a very good understanding of the workflow. Let's reach a consensus. I thought we had a silent one, because nobody commented until the patch was submitted. This takes a lot of time and testing, on multiple branches, so waiting until the last moment is not productive.
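[Editor's note] As a rough illustration of the heartbeat path mentioned above, here is a minimal sketch, with hypothetical class and method names rather than the actual NameNode code, of how a storage that stops appearing in heartbeats can be pruned without waiting for a full block report:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

// Simplified sketch only; the real NameNode tracks much more per-storage state
// and re-replicates the blocks of a storage it drops.
class DatanodeStorageSketch {
    // storage ID -> per-storage state the NameNode tracks
    private final Map<String, Object> storageMap = new HashMap<>();

    void addStorage(String storageId) {
        storageMap.put(storageId, new Object());
    }

    // Called on every heartbeat with the storage IDs the DataNode just reported.
    void updateFromHeartbeat(Set<String> reportedStorageIds) {
        Iterator<Map.Entry<String, Object>> it = storageMap.entrySet().iterator();
        while (it.hasNext()) {
            String id = it.next().getKey();
            if (!reportedStorageIds.contains(id)) {
                // The DataNode no longer reports this storage: treat it as failed
                // right away instead of waiting for the next full block report.
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        DatanodeStorageSketch dn = new DatanodeStorageSketch();
        dn.addStorage("DS-1");
        dn.addStorage("DS-2");
        // The next heartbeat only mentions DS-1, so DS-2 is pruned immediately.
        dn.updateFromHeartbeat(new HashSet<>(Arrays.asList("DS-1")));
        System.out.println(dn.storageMap.keySet()); // prints [DS-1]
    }
}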
> BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-10301
>                 URL: https://issues.apache.org/jira/browse/HDFS-10301
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.1
>            Reporter: Konstantin Shvachko
>            Assignee: Vinitha Reddy Gankidi
>            Priority: Critical
>         Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report. It then sends the block report again. The NameNode, while processing these two reports at the same time, can interleave processing of storages from the different reports. This screws up the blockReportId field, which makes the NameNode think that some storages are zombies. Replicas from zombie storages are immediately removed, causing missing blocks.
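[Editor's note] To make the interleaving concrete, here is a minimal sketch of the failure mode described above, under the assumption of HDFS-7960-style zombie detection: each storage is stamped with the ID of the last full block report it appeared in, and after a report is processed any storage holding a different ID is treated as a zombie. The names are simplified and hypothetical, not the real NameNode data structures.

import java.util.HashMap;
import java.util.Map;

class ZombieDetectionSketch {
    // storage ID -> ID of the last full block report this storage was seen in
    static final Map<String, Long> lastBlockReportId = new HashMap<>();

    static void processStorageReport(String storageId, long reportId) {
        // In the real NameNode this would also process the storage's block list.
        lastBlockReportId.put(storageId, reportId);
    }

    static void removeZombies(long currentReportId) {
        // Any storage not stamped with the current report ID is declared zombie;
        // removing the entry stands in for deleting all replicas on that storage.
        lastBlockReportId.entrySet().removeIf(e -> e.getValue() != currentReportId);
    }

    public static void main(String[] args) {
        // A DataNode with storages s1 and s2 sends report #1, times out, and
        // retransmits the same content as report #2. The NameNode interleaves
        // per-storage processing of the two copies:
        processStorageReport("s1", 1); // s1 from the original report
        processStorageReport("s1", 2); // s1 from the retransmission
        processStorageReport("s2", 2); // s2 from the retransmission
        processStorageReport("s2", 1); // s2 from the original report arrives last
        removeZombies(2);              // s2 is falsely declared zombie
        System.out.println(lastBlockReportId.keySet()); // prints [s1]
    }
}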