From: "Walter Su (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Thu, 5 Mar 2015 09:50:39 +0000 (UTC)
Subject: [jira] [Assigned] (HDFS-7068) Support multiple block placement policies

[ https://issues.apache.org/jira/browse/HDFS-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Walter Su reassigned HDFS-7068:
-------------------------------

    Assignee: Walter Su  (was: Zesheng Wu)

> Support multiple block placement policies
> -----------------------------------------
>
>                 Key: HDFS-7068
>                 URL: https://issues.apache.org/jira/browse/HDFS-7068
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.5.1
>            Reporter: Zesheng Wu
>            Assignee: Walter Su
>
> According to the code, the current implementation of HDFS supports only a single block placement policy, which is BlockPlacementPolicyDefault unless configured otherwise.
> The default policy is sufficient for most circumstances, but it does not work well in some special cases.
> For example, on a shared cluster we may want to erasure-code all the files under certain directories, so the files under those directories need a new placement policy, while other files keep using the default placement policy. This means HDFS needs to support multiple placement policies at the same time.
> One straightforward approach: keep the default placement policy configured as the default, and let users specify a customized placement policy through extended attributes (xattrs). When HDFS chooses replica targets, it first checks for a customized placement policy on the path; if none is specified, it falls back to the default one.
> Any thoughts?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
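
The xattr-based resolution step described in the issue can be illustrated with a small, self-contained Java sketch. All names below (PlacementPolicy, PolicyResolver, the xattr key "user.block.placement.policy") are hypothetical illustrations, not the real HDFS BlockPlacementPolicy API: the resolver keeps a registry of named policies, reads a hypothetical xattr from the file's directory, and falls back to the default policy when no xattr is set.

import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of the proposed lookup-with-fallback logic.
 * All names here are hypothetical and do not reflect the actual
 * HDFS BlockPlacementPolicy API or its method signatures.
 */
public class PolicyResolver {

    /** Stand-in for a block placement policy implementation. */
    interface PlacementPolicy {
        String name();
    }

    /** Hypothetical xattr key under which a directory names its policy. */
    static final String POLICY_XATTR = "user.block.placement.policy";

    private final PlacementPolicy defaultPolicy;
    private final Map<String, PlacementPolicy> registeredPolicies = new HashMap<>();

    PolicyResolver(PlacementPolicy defaultPolicy) {
        this.defaultPolicy = defaultPolicy;
    }

    /** Register a named policy, e.g. an erasure-coding-aware one. */
    void register(String name, PlacementPolicy policy) {
        registeredPolicies.put(name, policy);
    }

    /**
     * Resolve the policy for a file: first look for the xattr on the file's
     * directory (simulated here by a plain Map of xattrs); if it is absent or
     * unknown, fall back to the default policy, as the proposal describes.
     */
    PlacementPolicy resolve(Map<String, String> dirXattrs) {
        String requested = (dirXattrs == null) ? null : dirXattrs.get(POLICY_XATTR);
        if (requested != null && registeredPolicies.containsKey(requested)) {
            return registeredPolicies.get(requested);
        }
        return defaultPolicy;
    }

    public static void main(String[] args) {
        PolicyResolver resolver = new PolicyResolver(() -> "default");
        resolver.register("erasure-coding", () -> "erasure-coding");

        // Directory with the xattr set -> customized policy is chosen.
        Map<String, String> ecDir = new HashMap<>();
        ecDir.put(POLICY_XATTR, "erasure-coding");
        System.out.println(resolver.resolve(ecDir).name());           // erasure-coding

        // Directory without the xattr -> falls back to the default policy.
        System.out.println(resolver.resolve(new HashMap<>()).name()); // default
    }
}

In a real implementation the registry and the xattr lookup would live in the NameNode's block management code, and the per-directory xattr would be consulted once when choosing replica targets; the sketch only shows the fallback behaviour the proposal asks about.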