Date: Wed, 6 May 2015 03:36:17 +0000 (UTC)
From: "Allen Wittenauer (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-4861) BlockPlacementPolicyDefault does not consider decommissioning racks

     [ https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-4861:
-----------------------------------
    Labels: BB2015-05-TBR  (was: )

> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
>                 Key: HDFS-4861
>                 URL: https://issues.apache.org/jira/browse/HDFS-4861
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.7, 2.1.0-beta
>            Reporter: Kihwal Lee
>            Assignee: Rushabh S Shah
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-4861-v2.patch, HDFS-4861.patch
>
>
> getMaxNodesPerRack() calculates the max replicas/rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Since this does not consider the racks that are being decommissioned, and the decommissioning state is only checked later in isGoodTarget(), certain blocks are not replicated even when there are many racks and nodes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
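[Editor's note] A minimal standalone sketch of the arithmetic behind the report, not the actual Hadoop class: the per-rack cap is computed from the total rack count, so when most racks are decommissioning (and their nodes are later rejected by isGoodTarget()), the cap times the usable racks can fall below the replica count. The replica and rack counts below are assumed purely for illustration.

```java
// Hypothetical sketch of the cap formula from BlockPlacementPolicyDefault.
public class MaxNodesPerRackSketch {

    // Mirrors the line quoted in the issue:
    // int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
    static int maxNodesPerRack(int totalNumOfReplicas, int numRacks) {
        return (totalNumOfReplicas - 1) / numRacks + 2;
    }

    public static void main(String[] args) {
        int replicas = 10;            // illustrative replica count
        int totalRacks = 5;           // includes decommissioning racks
        int decommissioning = 3;      // assumed for illustration
        int usableRacks = totalRacks - decommissioning;

        // Cap from the full rack count: (10-1)/5 + 2 = 3.
        // Only 2 usable racks remain, so at most 2 * 3 = 6 targets
        // can be chosen -- fewer than the 10 replicas requested.
        int capAllRacks = maxNodesPerRack(replicas, totalRacks);

        // Counting only non-decommissioning racks gives (10-1)/2 + 2 = 6,
        // and 2 * 6 = 12 >= 10, so full replication would succeed.
        int capUsableRacks = maxNodesPerRack(replicas, usableRacks);

        System.out.println(capAllRacks + " " + capUsableRacks); // prints "3 6"
    }
}
```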