hadoop-hdfs-dev mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-10968) BlockManager#isNewRack should consider decommissioning nodes
Date Thu, 06 Oct 2016 00:54:21 GMT
Jing Zhao created HDFS-10968:

             Summary: BlockManager#isNewRack should consider decommissioning nodes
                 Key: HDFS-10968
                 URL: https://issues.apache.org/jira/browse/HDFS-10968
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: erasure-coding, namenode
    Affects Versions: 3.0.0-alpha1
            Reporter: Jing Zhao
            Assignee: Jing Zhao

For an EC block, it is possible that we have enough internal blocks but not enough racks. For this case, the current reconstruction code calls {{BlockManager#isNewRack}} to check whether the target node can increase the total rack count, by comparing the target node's rack with the source nodes' racks:
{code}
    for (DatanodeDescriptor src : srcs) {
      if (src.getNetworkLocation().equals(target.getNetworkLocation())) {
        return false;
      }
    }
    return true;
{code}
However, {{srcs}} here may include a decommissioning node, in which case the target node should be allowed to be in the same rack as that node, since the decommissioning replica will eventually be removed.

For example, suppose we have 11 nodes h1 ~ h11, located in racks r1, r1, r2, r2, r3, r3, r4, r4, r5, r5, r6, respectively. Suppose an EC block has 9 live internal blocks on h1~h8 and h11, and one internal block on h9, which is being decommissioned. The current code will not choose h10 for reconstruction because {{isNewRack}} thinks h10 is on the same rack as h9.
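A minimal sketch of the proposed behavior (hypothetical, not the actual HDFS patch): skip decommissioning sources when deciding whether the target adds a new rack. Here {{Node}} is a simplified stand-in for {{DatanodeDescriptor}}, keeping only the rack location and the admin state that matter for the check:

```java
import java.util.List;

class RackCheckSketch {
    // Simplified stand-in for DatanodeDescriptor: rack plus admin state.
    static class Node {
        final String rack;
        final boolean decommissioning;
        Node(String rack, boolean decommissioning) {
            this.rack = rack;
            this.decommissioning = decommissioning;
        }
    }

    // Returns true if the target's rack differs from every *live* source's
    // rack. A decommissioning source's rack is ignored, since its replica
    // will eventually go away and should not block the target choice.
    static boolean isNewRack(List<Node> srcs, Node target) {
        for (Node src : srcs) {
            if (src.decommissioning) {
                continue; // do not count a decommissioning replica's rack
            }
            if (src.rack.equals(target.rack)) {
                return false;
            }
        }
        return true;
    }
}
```

With this change, the scenario above works out: h9 on r5 is decommissioning, so a target h10 on r5 is still considered a new rack, while a target sharing a rack with a live source is rejected as before.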

This message was sent by Atlassian JIRA
