hadoop-hdfs-issues mailing list archives

From "Rushabh S Shah (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-4861) BlockPlacementPolicyDefault does not consider decommissioning racks
Date Fri, 04 Apr 2014 22:53:20 GMT

     [ https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rushabh S Shah updated HDFS-4861:
---------------------------------

    Attachment: HDFS-4861-v2.patch

> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
>                 Key: HDFS-4861
>                 URL: https://issues.apache.org/jira/browse/HDFS-4861
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.7, 2.1.0-beta
>            Reporter: Kihwal Lee
>            Assignee: Rushabh S Shah
>         Attachments: HDFS-4861-v2.patch, HDFS-4861.patch
>
>
> getMaxNodesPerRack() calculates the max replicas/rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Since this does not consider the racks that are being decommissioned, and the decommissioning state is only checked later in isGoodTarget(), certain blocks are not replicated even when there are many racks and nodes.
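The failure mode can be sketched numerically. The following is a minimal standalone illustration, not the actual BlockPlacementPolicyDefault code; the class name, the rack counts, and the replication factor are hypothetical. It shows that when the denominator counts decommissioning racks, the per-rack cap (due to integer division) can be too small for the remaining usable racks to hold all replicas:

```java
// Hypothetical sketch of the reported formula, showing how counting
// decommissioning racks deflates the per-rack replica limit.
public class MaxNodesPerRackSketch {
    // Formula quoted in the report: max replicas allowed on one rack.
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    public static void main(String[] args) {
        int replicas = 10;    // hypothetical replication factor
        int totalRacks = 10;  // includes racks whose nodes are all decommissioning
        int usableRacks = 2;  // racks that can actually accept new replicas

        // Full rack count: (10-1)/10 + 2 = 2 replicas per rack.
        // With only 2 usable racks, at most 2*2 = 4 replicas can ever be
        // placed; the remaining 6 stay under-replicated, because the
        // decommissioning nodes are rejected later in isGoodTarget().
        int limitAll = maxNodesPerRack(replicas, totalRacks);
        System.out.println("counting all racks: limit=" + limitAll
                + ", placeable=" + limitAll * usableRacks);

        // Excluding decommissioning racks: (10-1)/2 + 2 = 6 per rack,
        // which lets all 10 replicas fit on the 2 usable racks.
        int limitUsable = maxNodesPerRack(replicas, usableRacks);
        System.out.println("usable racks only: limit=" + limitUsable
                + ", placeable=" + Math.min(limitUsable * usableRacks, replicas));
    }
}
```

With the full rack count the limit works out to 2 per rack (4 placeable replicas of 10); excluding the decommissioning racks raises it to 6, enough for full replication.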



--
This message was sent by Atlassian JIRA
(v6.2#6252)
