hadoop-hdfs-issues mailing list archives

From "Xiao Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9314) Improve BlockPlacementPolicyDefault's picking of excess replicas
Date Mon, 23 Nov 2015 06:10:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021568#comment-15021568 ]

Xiao Chen commented on HDFS-9314:
---------------------------------

Thanks Walter.
From patch 3 onward, the implementation is no longer a fallback strategy, but one that guarantees the number of remaining racks does not drop below 2. See [comments above|https://issues.apache.org/jira/browse/HDFS-9314?focusedCommentId=15012152&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15012152]
between Ming and me for details on this decision.
So after the changes, the default policy is:
{code}
  /* If only 1 rack, pick all. If 2 racks, pick all replicas that have more
   * than one replica on the same rack; if there are no such replicas, pick
   * all. If 3 or more racks, pick all.
   */
{code}
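To make the rule concrete, here is a minimal, hypothetical sketch (generic names, not the actual patch code) of how the candidate set could be derived from a map of rack to replicas:
{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Map;

class ExcessReplicaRule {
  /** Candidate replicas from which an excess one may be deleted. */
  static <T> Collection<T> pickCandidates(Map<String, List<T>> rackMap) {
    List<T> moreThanOne = new ArrayList<>(); // replicas sharing a rack
    List<T> exactlyOne = new ArrayList<>();  // lone replica on its rack
    for (List<T> onRack : rackMap.values()) {
      (onRack.size() > 1 ? moreThanOne : exactlyOne).addAll(onRack);
    }
    List<T> all = new ArrayList<>(moreThanOne);
    all.addAll(exactlyOne);
    if (rackMap.size() == 2) {
      // Removing a lone-rack replica would leave only one rack, so
      // restrict to replicas that share a rack, if any exist.
      return moreThanOne.isEmpty() ? all : moreThanOne;
    }
    // 1 rack: the rack count can't be helped. 3+ racks: deleting any
    // single replica still leaves at least 2 racks. Pick all either way.
    return all;
  }
}
{code}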
That said, the node-group policy currently favors {{first}} with node-group-specific logic
as long as {{first}} is not empty. Then, when choosing between {{moreThanOne}} and {{exactlyOne}},
we could apply the default logic, but pass in {{nodeGroupMap}} instead of {{rackMap}}.
I'm not sure whether this is acceptable from a requirements perspective, but it would be more
consistent logically. Does this make sense? Also asking [~mingma] for advice.

Attached patch 7 implements this idea. FYI, the only difference between patches 6 and 7 is the following:
{code}
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyWithNodeGroup.java
@@ -367,7 +367,7 @@ private int addDependentNodesToExcludedNodes(DatanodeDescriptor chosenNode,
-    return moreThanOne.isEmpty()? exactlyOne : moreThanOne;
+    return super.pickupReplicaSet(moreThanOne, exactlyOne, nodeGroupMap);
   }
{code}
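For illustration, feeding the hypothetical helper sketched above a node-group keyed map (all datanode and node-group names below are made up) shows the intended effect of delegating to the default logic:
{code}
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NodeGroupExample {
  public static void main(String[] args) {
    // Hypothetical map keyed by node group instead of rack.
    Map<String, List<String>> nodeGroupMap = new HashMap<>();
    nodeGroupMap.put("/rack1/ng1", Arrays.asList("dnA", "dnB")); // 2 replicas
    nodeGroupMap.put("/rack2/ng2", Arrays.asList("dnC"));        // 1 replica
    // Two node groups total, so only dnA/dnB are deletion candidates;
    // removing dnC would drop the block to a single node group.
    Collection<String> candidates =
        ExcessReplicaRule.pickCandidates(nodeGroupMap);
    System.out.println(candidates); // [dnA, dnB]
  }
}
{code}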

> Improve BlockPlacementPolicyDefault's picking of excess replicas
> ----------------------------------------------------------------
>
>                 Key: HDFS-9314
>                 URL: https://issues.apache.org/jira/browse/HDFS-9314
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Ming Ma
>            Assignee: Xiao Chen
>         Attachments: HDFS-9314.001.patch, HDFS-9314.002.patch, HDFS-9314.003.patch, HDFS-9314.004.patch, HDFS-9314.005.patch
>
>
> The test case used in HDFS-9313 identified a NullPointerException as well as a limitation
> of excess replica picking. If the current replicas are on {SSD(rack r1), DISK(rack r2),
> DISK(rack r3), DISK(rack r3)} and the storage policy changes to HOT_STORAGE_POLICY_ID,
> BlockPlacementPolicyDefault won't be able to delete the SSD replica.



