hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4861) BlockPlacementPolicyDefault does not consider decommissioning racks
Date Tue, 01 Dec 2015 15:36:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033882#comment-15033882 ]

Kihwal Lee commented on HDFS-4861:
----------------------------------

[~shahrs87], why don't you post your proposed patch? If we must maintain the semantics of
{{BlockPlacementPolicyRackFaultTolerant}}, we could move {{maxNodesPerRack}} and other necessary
bits there.
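
For discussion, a rough sketch of what that move might look like; the override name, signature, and fields are assumptions for illustration, not the actual class layout or a patch:

{code}
// Hypothetical sketch: keep the stricter even-spread cap inside
// BlockPlacementPolicyRackFaultTolerant by overriding an assumed hook
// rather than changing BlockPlacementPolicyDefault for everyone.
@Override
protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
  int totalNumOfReplicas = numOfChosen + numOfReplicas;
  int numOfRacks = clusterMap.getNumOfRacks();
  // Rack-fault-tolerant semantics: spread replicas as evenly as possible.
  int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
  return new int[] {numOfReplicas, maxNodesPerRack};
}
{code}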

> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
>                 Key: HDFS-4861
>                 URL: https://issues.apache.org/jira/browse/HDFS-4861
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.23.7, 2.1.0-beta
>            Reporter: Kihwal Lee
>            Assignee: Rushabh S Shah
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-4861-v2.patch, HDFS-4861.patch
>
>
> getMaxNodesPerRack() calculates the max replicas/rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Since this does not consider the racks that are being decommissioned, and the decommissioning
> state is only checked later in isGoodTarget(), certain blocks are not replicated even when
> there are many racks and nodes.
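
For illustration only, a minimal sketch of the kind of adjustment the description implies: exclude racks whose remaining nodes are all decommissioning before computing the per-rack cap. The helper {{numOfUsableRacks()}} is hypothetical, and this is not the attached patch.

{code}
// Hypothetical sketch, not the HDFS-4861 patch: count only racks that still
// have at least one non-decommissioning node before deriving the cap.
int usableRacks = numOfUsableRacks(clusterMap);  // hypothetical helper
if (usableRacks < 1) {
  usableRacks = 1;                               // guard against divide-by-zero
}
int maxNodesPerRack = (totalNumOfReplicas - 1) / usableRacks + 2;
{code}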



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
