hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7300) The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed
Date Thu, 30 Oct 2014 14:01:35 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14190077#comment-14190077 ]

Hudson commented on HDFS-7300:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #1917 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1917/])
HDFS-7300. The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed. (kihwal: rev 3ae84e1ba8928879b3eda90e79667ba5a45d60f8)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppendRestart.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed
> ------------------------------------------------------------------------
>
>                 Key: HDFS-7300
>                 URL: https://issues.apache.org/jira/browse/HDFS-7300
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>             Fix For: 2.6.0
>
>         Attachments: HDFS-7300.patch, HDFS-7300.v2.patch
>
>
> The {{getMaxNodesPerRack()}} method can produce an undesirable result in some cases.
> - Three replicas on two racks: the max is 3, so all replicas can go to one rack.
> - Two replicas on two or more racks: the max is 2, so both replicas can end up in the same rack.
> {{BlockManager#isNeededReplication()}} fixes this after the block/file is closed, because {{blockHasEnoughRacks()}} will return false. This is not only extra work; it can also break the favored nodes feature.
> When there are two racks and two favored nodes are specified in the same rack, the NN may allocate the third replica on a node in the same rack, because {{maxNodesPerRack}} is 3. When closing the file, the NN moves a block to the other rack, so there is a 66% chance that a favored node is moved. If {{maxNodesPerRack}} were 2, this would not happen.
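
The two cases above follow directly from the rounding arithmetic of the old calculation. Below is a minimal, self-contained sketch assuming the pre-patch formula was {{(totalReplicas - 1) / numRacks + 2}}; the class and method names are illustrative placeholders, not the actual HDFS source.

{code:java}
// Illustrative sketch only -- not the actual HDFS source. Assumes the
// pre-HDFS-7300 formula: maxNodesPerRack = (totalReplicas - 1) / numRacks + 2.
public class MaxNodesPerRackSketch {

  // Hypothetical stand-in for the old calculation in BlockPlacementPolicyDefault.
  static int oldMaxNodesPerRack(int totalReplicas, int numRacks) {
    return (totalReplicas - 1) / numRacks + 2;
  }

  public static void main(String[] args) {
    // Case 1 above: 3 replicas on 2 racks -> max is 3,
    // so nothing prevents all three replicas from landing on one rack.
    System.out.println(oldMaxNodesPerRack(3, 2)); // prints 3

    // Case 2 above: 2 replicas on 2 (or more) racks -> max is 2,
    // so both replicas can end up in the same rack.
    System.out.println(oldMaxNodesPerRack(2, 2)); // prints 2
  }
}
{code}

If the bound were instead kept strictly below the total replica count whenever more than one rack is available (one plausible adjustment, consistent with the last sentence of the description), both cases would force placement across at least two racks.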



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
