hadoop-hdfs-issues mailing list archives

From "Lin Yiqun (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9685) StopDecommission for datanode should remove the underReplicatedBlocks
Date Fri, 22 Jan 2016 14:10:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15112432#comment-15112432 ]

Lin Yiqun commented on HDFS-9685:
---------------------------------

Updated the patch. Blocks are removed only when the node's state is decommission-in-progress.
If the node is already decommissioned, its underReplicatedBlocks have mostly been replicated already.
{code}
public void stopDecommission(DatanodeDescriptor node) {
    if (node.isDecommissionInProgress() || node.isDecommissioned()) {
      AdminStates adminState = node.getAdminState();
      // Update DN stats maintained by HeartbeatManager
      hbManager.stopDecommission(node);
      // Over-replicated blocks will be detected and processed when
      // the dead node comes back and sends in its full block report.
      // The original blocks in decomNodes will be removed from
      // neededReplications if node is decommission-in-progress.
      if (node.isAlive()) {
        blockManager.processOverReplicatedBlocksOnReCommission(node);

        if (adminState == AdminStates.DECOMMISSION_INPROGRESS) {
          removeNeededReplicatedBlocksInDecomNodes(node);
        }
      }
      // Remove from tracking in DecommissionManager
      pendingNodes.remove(node);
      decomNodeBlocks.remove(node);
    } else {
      LOG.trace("stopDecommission: Node {} in {}, nothing to do." +
          node, node.getAdminState());
    }
  }
{code}
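
The helper {{removeNeededReplicatedBlocksInDecomNodes}} called above is not shown in this snippet. The following is only a rough, simplified sketch of the intended idea, using stand-in types and names (UnderReplicatedQueueSketch, BlockSketch, removeBlocksOfRecommissionedNode are hypothetical, not the real BlockManager/UnderReplicatedBlocks internals): walk the blocks stored on the recommissioned node and drop from the under-replicated queue any block that already has enough live replicas.
{code}
// Simplified, self-contained sketch only; the types below are stand-ins,
// not the actual HDFS BlockManager / UnderReplicatedBlocks classes.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class UnderReplicatedQueueSketch {
  /** Stand-in for the under-replicated (neededReplications) queue. */
  private final Set<Long> queuedBlockIds = new HashSet<>();

  void add(long blockId) {
    queuedBlockIds.add(blockId);
  }

  int size() {
    return queuedBlockIds.size();
  }

  /**
   * Idea behind the patch: when stopDecommission() runs for a node that
   * was still DECOMMISSION_INPROGRESS, its blocks were only queued
   * because the node was being decommissioned, so once it is back in
   * service they can be dropped if enough live replicas already exist.
   */
  void removeBlocksOfRecommissionedNode(List<BlockSketch> blocksOnNode) {
    for (BlockSketch b : blocksOnNode) {
      if (b.liveReplicas >= b.expectedReplicas) {
        queuedBlockIds.remove(b.blockId);
      }
    }
  }

  /** Stand-in for a block plus its current replication counts. */
  static class BlockSketch {
    final long blockId;
    final int liveReplicas;
    final int expectedReplicas;

    BlockSketch(long blockId, int liveReplicas, int expectedReplicas) {
      this.blockId = blockId;
      this.liveReplicas = liveReplicas;
      this.expectedReplicas = expectedReplicas;
    }
  }
}
{code}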

> StopDecommission for datanode should remove the underReplicatedBlocks
> ---------------------------------------------------------------------
>
>                 Key: HDFS-9685
>                 URL: https://issues.apache.org/jira/browse/HDFS-9685
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Lin Yiqun
>            Assignee: Lin Yiqun
>         Attachments: HDFS-9685.001.patch
>
>
> When a node is removed from the exclude file, its state changes from decommission-in-progress
> back to in service, but the cluster's underReplicatedBlocksNum does not decrease. Most of these
> underReplicatedBlocks are no longer needed, and it costs the namenode much time to deal with
> them, only to frequently find that there are already enough replicas. So in the
> {{stopDecommission}} operation, we should remove the neededReplicatedBlocks of decomNodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
