Date: Thu, 30 Oct 2014 14:01:35 +0000 (UTC)
From: "Hudson (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-7300) The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed

[ https://issues.apache.org/jira/browse/HDFS-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190077#comment-14190077 ]

Hudson commented on HDFS-7300:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #1917 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1917/])
HDFS-7300.
The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed (kihwal: rev 3ae84e1ba8928879b3eda90e79667ba5a45d60f8)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

> The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed
> ------------------------------------------------------------------------
>
>          Key: HDFS-7300
>          URL: https://issues.apache.org/jira/browse/HDFS-7300
>      Project: Hadoop HDFS
>   Issue Type: Bug
>     Reporter: Kihwal Lee
>     Assignee: Kihwal Lee
>     Priority: Critical
>      Fix For: 2.6.0
>
>  Attachments: HDFS-7300.patch, HDFS-7300.v2.patch
>
>
> {{getMaxNodesPerRack()}} can produce an undesirable result in some cases:
> - Three replicas on two racks: the max is 3, so all three replicas can go to one rack.
> - Two replicas on two or more racks: the max is 2, so both replicas can end up in the same rack.
>
> {{BlockManager#isNeededReplication()}} fixes this after the block/file is closed, because {{blockHasEnoughRacks()}} will return false. This is not only extra work; it can also break the favored-nodes feature.
> When there are two racks and two favored nodes are specified in the same rack, the NN may allocate the third replica on a node in the same rack, because {{maxNodesPerRack}} is 3. When closing the file, the NN moves a block to the other rack, so there is a 66% chance that a favored node is moved. If {{maxNodesPerRack}} were 2, this would not happen.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
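[Editor's note] The scenarios in the issue description can be sketched as arithmetic. The sketch below is an assumption based on the description, not the committed patch: it models the per-rack limit as (totalNumOfReplicas-1)/numOfRacks + 2 (the formula that yields max=3 for three replicas on two racks and max=2 for two replicas), plus a hypothetical adjusted variant that decrements the limit whenever it would allow every replica on one rack, as the fix is described to do.

```java
// Hypothetical standalone model of the maxNodesPerRack arithmetic from
// HDFS-7300. Names and formula are assumptions inferred from the issue
// description; the real code is in BlockPlacementPolicyDefault.
public class MaxNodesPerRackSketch {

    // Flawed limit: integer division plus a slack of 2.
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    // Adjusted limit: if the limit equals the total replica count, a single
    // rack could hold every replica, so decrement to force a second rack.
    static int maxNodesPerRackAdjusted(int totalNumOfReplicas, int numOfRacks) {
        int max = maxNodesPerRack(totalNumOfReplicas, numOfRacks);
        if (max == totalNumOfReplicas && max > 1) {
            max--;
        }
        return max;
    }

    public static void main(String[] args) {
        // Three replicas, two racks: (3-1)/2 + 2 = 3, so all three
        // replicas may legally land on one rack.
        System.out.println(maxNodesPerRack(3, 2));         // 3
        // Two replicas, two racks: (2-1)/2 + 2 = 2, so both replicas
        // may end up in the same rack.
        System.out.println(maxNodesPerRack(2, 2));         // 2
        // Adjusted values force multi-rack placement at allocation time.
        System.out.println(maxNodesPerRackAdjusted(3, 2)); // 2
        System.out.println(maxNodesPerRackAdjusted(2, 2)); // 1
    }
}
```

With the adjusted limit, the favored-nodes scenario in the description cannot arise: with two racks and maxNodesPerRack of 2 rather than 3, the NN never places all three replicas in one rack, so no replica has to be moved off a favored node at close time.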