hadoop-hdfs-dev mailing list archives

From "Ming Ma (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-7442) Optimization for decommission-in-progress check
Date Fri, 01 May 2015 19:53:06 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ming Ma resolved HDFS-7442.
    Resolution: Duplicate

HDFS-7411 has addressed this issue.

> Optimization for decommission-in-progress check
> -----------------------------------------------
>                 Key: HDFS-7442
>                 URL: https://issues.apache.org/jira/browse/HDFS-7442
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.6.0
>            Reporter: Ming Ma
> 1. {{isReplicationInProgress}} currently rescans all blocks of a given node each time
the method is called; it becomes less efficient as more of the node's blocks become fully
replicated. Each scan holds the FS lock.
> 2. As discussed in HDFS-7374, if the node dies during decommission, it is useful
to be able to mark the dead node as decommissioned once all its blocks are fully replicated.
Currently there is no way to check the blocks of a dead decommission-in-progress node,
since the dead node has been removed from the block map.
> There are mitigations for these limitations: set dfs.namenode.decommission.nodes.per.interval
to a small value to reduce how long the lock is held, and HDFS-7409 uses global FS state to tell
whether a dead node's blocks are fully replicated.
> To address these scenarios, it would be useful to track the decommission-in-progress blocks.
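The tracking idea described above can be sketched as follows. This is a hypothetical illustration, not the actual NameNode code: the class and method names (DecommissionTracker, startDecommission, onBlockFullyReplicated, isDecommissionComplete) are invented for this sketch. The point is that maintaining a per-node set of still-under-replicated blocks turns the completion check into an emptiness test, instead of a full rescan of every block under the FS lock; it also keeps working for a dead node, whose blocks are no longer in the block map.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of per-node tracking of decommission-in-progress blocks.
public class DecommissionTracker {
    // node id -> blocks of that node still awaiting full replication
    private final Map<String, Set<Long>> pendingBlocks = new HashMap<>();

    // Called once when decommission starts: record the node's under-replicated
    // blocks instead of rescanning them on every later check.
    public void startDecommission(String node, Collection<Long> underReplicated) {
        pendingBlocks.put(node, new HashSet<>(underReplicated));
    }

    // Called when one of the node's blocks reaches full replication.
    public void onBlockFullyReplicated(String node, long blockId) {
        Set<Long> pending = pendingBlocks.get(node);
        if (pending != null) {
            pending.remove(blockId);
        }
    }

    // Replaces the full rescan: decommission is complete when the pending set
    // is empty. This works even after the node is removed from the block map,
    // because the set is tracked independently of the node's liveness.
    public boolean isDecommissionComplete(String node) {
        Set<Long> pending = pendingBlocks.get(node);
        return pending == null || pending.isEmpty();
    }
}
```

The set only shrinks after the initial scan, so the per-check cost is proportional to the remaining under-replicated blocks rather than to the node's total block count.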

This message was sent by Atlassian JIRA
