Date: Thu, 20 Aug 2015 12:16:47 +0000 (UTC)
From: "Yi Liu (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-8884) Fail-fast check in BlockPlacementPolicyDefault#chooseTarget

     [ https://issues.apache.org/jira/browse/HDFS-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu updated HDFS-8884:
-------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.0
           Status: Resolved  (was: Patch Available)

> Fail-fast check in BlockPlacementPolicyDefault#chooseTarget
> -----------------------------------------------------------
>
>                 Key: HDFS-8884
>                 URL: https://issues.apache.org/jira/browse/HDFS-8884
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8884.001.patch, HDFS-8884.002.patch
>
>
> In the current BlockPlacementPolicyDefault, when choosing a datanode storage to place a block, we have the following logic:
> {code}
> final DatanodeStorageInfo[] storages = DFSUtil.shuffle(
>     chosenNode.getStorageInfos());
> int i = 0;
> boolean search = true;
> for (Iterator<Map.Entry<StorageType, Integer>> iter = storageTypes
>     .entrySet().iterator(); search && iter.hasNext(); ) {
>   Map.Entry<StorageType, Integer> entry = iter.next();
>   for (i = 0; i < storages.length; i++) {
>     StorageType type = entry.getKey();
>     final int newExcludedNodes = addIfIsGoodTarget(storages[i],
> {code}
> We iterate over all storages of the candidate datanode (actually two nested {{for}} loops, although both are usually small) even when the datanode itself is not a good target (e.g. decommissioned, stale, or too busy), since currently all the checks are done in {{addIfIsGoodTarget}}.
> We can fail fast: check the datanode-related conditions first; if the datanode is not a good target, there is no need to shuffle and iterate over its storages. This makes target selection more efficient (a simplified sketch follows at the end of this message).


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
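
Below is a simplified, self-contained sketch of the fail-fast pattern described in the issue: run the datanode-level checks before shuffling and iterating the storages. The {{Node}} and {{Storage}} classes and the specific conditions are stand-ins for illustration only; they are neither the actual HDFS classes nor the committed HDFS-8884 patch.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class FailFastSketch {

  // Simplified stand-in for a datanode and its attached storages.
  static class Node {
    boolean decommissioned;
    boolean stale;
    int activeTransfers;          // rough "too busy" signal
    List<Storage> storages = new ArrayList<>();
  }

  // Simplified stand-in for a single storage on the datanode.
  static class Storage {
    long remainingBytes;

    Storage(long remainingBytes) {
      this.remainingBytes = remainingBytes;
    }
  }

  // Datanode-level checks only; no storage is examined here.
  static boolean isGoodNode(Node node, int maxTransfers) {
    return !node.decommissioned
        && !node.stale
        && node.activeTransfers <= maxTransfers;
  }

  // Pick a storage with enough space, failing fast on a bad node.
  static Storage chooseStorage(Node node, long blockSize, int maxTransfers) {
    // Fail-fast: reject the datanode before shuffling or iterating its
    // storages, instead of repeating the node checks once per storage.
    if (!isGoodNode(node, maxTransfers)) {
      return null;
    }
    List<Storage> shuffled = new ArrayList<>(node.storages);
    Collections.shuffle(shuffled);
    for (Storage s : shuffled) {
      if (s.remainingBytes >= blockSize) {
        return s;                 // storage checks only run for good nodes
      }
    }
    return null;
  }

  public static void main(String[] args) {
    Node node = new Node();
    node.storages.addAll(Arrays.asList(new Storage(1L << 30), new Storage(0)));
    node.decommissioned = true;   // node-level failure
    // Prints null: the node is rejected before its storages are touched.
    System.out.println(chooseStorage(node, 128L << 20, 16));
  }
}
{code}

In the real BlockPlacementPolicyDefault the node-level conditions correspond to things like decommissioning status, staleness, and xceiver load, while the storage-level conditions cover storage type and remaining space; the exact split in the committed patch may differ from this sketch.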