Date: Wed, 25 Jan 2012 21:25:38 +0000 (UTC)
From: "Eli Collins (Commented) (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-2742) HA: observed dataloss in replication stress test

[
https://issues.apache.org/jira/browse/HDFS-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13193341#comment-13193341 ]

Eli Collins commented on HDFS-2742:
-----------------------------------

Todd,

The approach in the latest patch looks good to me. The tests are great. Mostly minor comments.

I think BM should distinguish between corrupt and out-of-date replicas. The new case in processFirstBlockReport in this patch, and the place where we mark reported RBW replicas for completed blocks as corrupt, both use "corrupt" as a proxy for "please delete". I wasn't able to come up with additional bugs with a similar cause, but it would be easier to reason about if only truly corrupt replicas were marked as such. We can punt this to a separate jira, if you agree.

In FSNamesystem#isSafeModeTrackingBlocks, shouldn't we assert that haEnabled is true if we're in SM and shouldIncrementallyTrackBlocks is true, instead of short-circuiting? We currently wouldn't know if we violated this condition, because we'd just return false based on haEnabled.

Nits:
* s/stam or/stamp or
* s/tracing/tracking
* increment|decrementSafeBlockCount need indenting fixes
* Can remove the NameNodeAdapter and TestSafeMode diffs
* Can remove the TODO comment in BM#getActiveBlockCount
* Append "null otherwise" to "if it should be kept" in BM

Thanks,
Eli

> HA: observed dataloss in replication stress test
> ------------------------------------------------
>
>                 Key: HDFS-2742
>                 URL: https://issues.apache.org/jira/browse/HDFS-2742
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node, ha, name-node
>    Affects Versions: HA branch (HDFS-1623)
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>            Priority: Blocker
>         Attachments: hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, hdfs-2742.txt, log-colorized.txt
>
>
> The replication stress test case failed over the weekend since one of the replicas went missing.
> Still diagnosing the issue, but it seems like the chain of events was something like:
> - a block report was generated on one of the nodes while the block was being written - thus the block report listed the block as RBW
> - when the standby replayed this queued message, it was replayed after the file was marked complete. Thus it marked this replica as corrupt
> - it asked the DN holding the corrupt replica to delete it. And, I think, removed it from the block map at this time.
> - That DN then did another block report before receiving the deletion. This caused it to be re-added to the block map, since it was "FINALIZED" now.
> - Replication was lowered on the file, and it counted the above replica as non-corrupt, and asked for the other replicas to be deleted.
> - All replicas were lost.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
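The race described in the chain of events above can be sketched as a toy state model. This is a simplified illustration, not the real HDFS BlockManager API; the names MiniBlockMap, replayQueuedReport, and ReplicaState are hypothetical. It shows how a stale RBW report replayed after file completion marks a replica corrupt, and how a later FINALIZED report from the same DN re-adds it and clears the corrupt status, letting it count as a live replica:

```java
import java.util.*;

// Illustrative replica states for one block (hypothetical, simplified).
enum ReplicaState { RBW, FINALIZED }

// Toy model of the standby's view of a single block's replicas.
class MiniBlockMap {
    Map<String, ReplicaState> replicas = new HashMap<>(); // DN -> reported state
    Set<String> corrupt = new HashSet<>();                // DNs marked corrupt
    boolean fileComplete = false;

    // Replay one (possibly queued, stale) block report entry from a DN.
    void replayQueuedReport(String dn, ReplicaState reported) {
        if (fileComplete && reported == ReplicaState.RBW) {
            // RBW replica for a completed block: treated as corrupt,
            // removed from the block map, and scheduled for deletion.
            corrupt.add(dn);
            replicas.remove(dn);
        } else {
            // A later FINALIZED report from the same DN (sent before the
            // deletion arrived) re-adds the replica and clears "corrupt".
            replicas.put(dn, reported);
            corrupt.remove(dn);
        }
    }

    long liveReplicas() {
        return replicas.values().stream()
                .filter(s -> s == ReplicaState.FINALIZED).count();
    }
}
```

Under this model, once the replica is counted live again, lowering the file's replication can pick the other (good) replicas for deletion, matching the observed data loss.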