hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4861) BlockPlacementPolicyDefault does not consider decommissioning racks
Date Tue, 01 Dec 2015 19:16:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15034394#comment-15034394 ]

Andrew Wang commented on HDFS-4861:
-----------------------------------

I can add some color about this policy: we need it for erasure coding, for rack fault
tolerance. We want to spread out the blocks in a stripe so that none of them share a rack.
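
As a rough illustration (assuming an RS(6,3) stripe of nine blocks, which is not stated in this thread): with nine racks the quoted formula yields (9-1)/9 + 2 = 2 allowed blocks per rack, so two blocks of the same stripe may legally land on one rack and a single rack failure can take out two of them. A rack-fault-tolerant EC policy would want at most one block of a stripe per rack when enough racks exist.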

> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
>                 Key: HDFS-4861
>                 URL: https://issues.apache.org/jira/browse/HDFS-4861
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.7, 2.1.0-beta
>            Reporter: Kihwal Lee
>            Assignee: Rushabh S Shah
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-4861-v2.patch, HDFS-4861.patch
>
>
> getMaxNodesPerRack() calculates the max replicas/rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Since this does not consider the racks that are being decommissioned, and the decommissioning
> state is only checked later in isGoodTarget(), certain blocks are not replicated even when
> there are many racks and nodes.
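
Below is a minimal sketch of the arithmetic the description points at (hypothetical class, method, and variable names, not the actual BlockPlacementPolicyDefault code): because clusterMap.getNumOfRacks() also counts racks whose nodes are all decommissioning, the per-rack cap is computed as if those racks were usable, while isGoodTarget() later rejects their nodes, so fewer replicas can be placed than requested.

{code}
/**
 * Illustrative sketch only -- names and numbers are assumptions, not HDFS code.
 * Shows how counting decommissioning racks in the divisor keeps the per-rack
 * cap low while those racks cannot actually host replicas.
 */
public class MaxNodesPerRackSketch {

  /** The formula quoted in the issue: (totalNumOfReplicas-1)/numRacks + 2. */
  static int maxNodesPerRack(int totalNumOfReplicas, int numRacks) {
    return (totalNumOfReplicas - 1) / numRacks + 2;
  }

  public static void main(String[] args) {
    int replicas = 10;
    int totalRacks = 10;          // what clusterMap.getNumOfRacks() would report
    int decommissioningRacks = 8; // racks whose nodes are all decommissioning
    int usableRacks = totalRacks - decommissioningRacks;

    // Cap computed against all racks, as in the quoted code:
    int cap = maxNodesPerRack(replicas, totalRacks);          // (10-1)/10 + 2 = 2

    // isGoodTarget() later rejects decommissioning nodes, so at most
    // cap * usableRacks replicas can actually be placed:
    int placeable = cap * usableRacks;                        // 2 * 2 = 4

    System.out.println("cap per rack       = " + cap);        // 2
    System.out.println("placeable replicas = " + placeable
        + " of " + replicas);                                 // 4 of 10

    // If the divisor excluded decommissioning racks, the cap would rise
    // with the shrinking set of usable racks and all replicas would fit:
    int adjustedCap = maxNodesPerRack(replicas, usableRacks); // (10-1)/2 + 2 = 6
    System.out.println("adjusted cap       = " + adjustedCap); // 6, and 6*2 >= 10
  }
}
{code}

With these illustrative numbers the block stops at 4 of the 10 requested replicas even though healthy nodes remain on the two usable racks, which is the under-replication the report describes.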



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
