From: "Walter Su (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Tue, 17 Nov 2015 02:17:11 +0000 (UTC)
Subject: [jira] [Commented] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas

    [ https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007848#comment-15007848 ]

Walter Su commented on HDFS-9314:
---------------------------------

Shall we consider not violating the placement policy when we decide to decrease the rack count?

In [HDFS-9313|https://issues.apache.org/jira/browse/HDFS-9313?focusedCommentId=14975592&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14975592] I agreed to choose the SSD, since the remaining 3 DISKs are already on enough racks. Consider a different case:

||rack_0||rack_1||
|replica_0(DISK)|replica_3(SSD)|
|replica_1(DISK)| |
|replica_2(DISK)| |

If we remove replica_3, the remaining replicas are not on enough racks. Is removing replica_3 still the right decision? Shall we make sure we don't violate the placement policy before deciding to decrease the rack count? We could schedule a replication task first, then come back and delete the SSD.

The old logic will choose from the {{exactlyOne}} collection as well, but only if {{moreThanOne}} is empty, so the old logic doesn't violate the placement policy.


> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
>                 Key: HDFS-9314
>                 URL: https://issues.apache.org/jira/browse/HDFS-9314
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ming Ma
>            Assignee: Xiao Chen
>       Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as the limitation of excess-replica picking. If the current replicas are on {SSD(rack 1), DISK(rack 1), DISK(rack 2), DISK(rack 2)} and the storage policy changes to HOT_STORAGE_POLICY_ID, BlockPlacementPolicyDefault won't be able to delete the SSD replica.
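
To make the trade-off concrete, here is a minimal, self-contained sketch of the old selection rule described in the comment above. It is not the actual BlockPlacementPolicyDefault code: {{Replica}} and {{pickCandidateSet}} are hypothetical stand-ins; only the {{moreThanOne}}/{{exactlyOne}} names come from the real code.

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Hypothetical stand-in for Hadoop's replica/storage types, for illustration only.
class Replica {
    final String rack;
    final String storageType; // e.g. "DISK" or "SSD"

    Replica(String rack, String storageType) {
        this.rack = rack;
        this.storageType = storageType;
    }
}

public class ExcessReplicaSketch {

    /**
     * Picks the candidate set for excess-replica deletion, mirroring the old
     * rule: prefer replicas on racks that hold more than one replica
     * (moreThanOne), and fall back to racks holding exactly one replica
     * (exactlyOne) only when moreThanOne is empty. This keeps the rack count
     * intact whenever an alternative exists.
     */
    static Collection<Replica> pickCandidateSet(Collection<Replica> moreThanOne,
                                                Collection<Replica> exactlyOne) {
        return moreThanOne.isEmpty() ? exactlyOne : moreThanOne;
    }

    public static void main(String[] args) {
        // The case from the comment: rack_0 holds three DISK replicas,
        // rack_1 holds the lone SSD replica.
        List<Replica> moreThanOne = new ArrayList<>();
        moreThanOne.add(new Replica("rack_0", "DISK")); // replica_0
        moreThanOne.add(new Replica("rack_0", "DISK")); // replica_1
        moreThanOne.add(new Replica("rack_0", "DISK")); // replica_2

        List<Replica> exactlyOne = new ArrayList<>();
        exactlyOne.add(new Replica("rack_1", "SSD"));   // replica_3

        // The old rule deletes from rack_0 and preserves both racks; deleting
        // the SSD on rack_1 (to satisfy the HOT storage policy) would leave
        // every replica on a single rack until re-replication catches up.
        for (Replica r : pickCandidateSet(moreThanOne, exactlyOne)) {
            System.out.println("candidate: " + r.rack + " " + r.storageType);
        }
    }
}
{code}

The sketch shows why the old logic never violates the placement policy: a replica from {{exactlyOne}} is only ever eligible when no rack has a spare copy. The question raised in the comment is whether a storage-type-aware picker should keep that same guarantee, deferring the SSD deletion until a replication task has restored the rack count.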