hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-5114) getMaxNodesPerRack() in BlockPlacementPolicyDefault does not take decommissioning nodes into account.
Date Tue, 20 Aug 2013 18:28:52 GMT
Kihwal Lee created HDFS-5114:
--------------------------------

             Summary: getMaxNodesPerRack() in BlockPlacementPolicyDefault does not take decommissioning nodes into account.
                 Key: HDFS-5114
                 URL: https://issues.apache.org/jira/browse/HDFS-5114
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 3.0.0, 2.1.0-beta
            Reporter: Kihwal Lee


If a large proportion of data nodes are being decommissioned, one or more racks may no longer
be writable. However, this is not taken into account when the default block placement policy
invokes getMaxNodesPerRack(). Some blocks, especially those with a high replication factor,
may not be fully replicated until the decommissioning nodes are taken out of dfs.include.
This can actually block decommissioning itself.
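
For illustration, here is a minimal sketch of the idea: compute the per-rack replica cap from the
nodes and racks that remain writable (i.e. excluding decommissioning nodes and racks containing
only decommissioning nodes) rather than from the full topology. It roughly mirrors the shape of
the existing calculation, but the class and parameter names (MaxNodesPerRackSketch,
numWritableNodes, numWritableRacks) are hypothetical and not the actual
BlockPlacementPolicyDefault code.

public class MaxNodesPerRackSketch {

  /**
   * Computes a per-rack replica cap, but counts only nodes and racks that can
   * still accept writes, i.e. excludes decommissioning nodes.
   */
  static int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas,
                                  int numWritableNodes, int numWritableRacks) {
    int totalNumOfReplicas = numOfChosen + numOfReplicas;
    // Clamp the target to the number of nodes that can actually take a replica.
    if (totalNumOfReplicas > numWritableNodes) {
      numOfReplicas -= (totalNumOfReplicas - numWritableNodes);
      totalNumOfReplicas = numWritableNodes;
    }
    if (numWritableRacks == 0) {
      return new int[] {0, 0};
    }
    // Spread replicas across the racks that are still writable rather than
    // across all racks in the topology.
    int maxNodesPerRack = (totalNumOfReplicas - 1) / numWritableRacks + 2;
    return new int[] {numOfReplicas, maxNodesPerRack};
  }

  public static void main(String[] args) {
    // Example: replication factor 10, 4 racks total, but only 2 racks with 8
    // nodes remain writable because the others hold only decommissioning nodes.
    int[] result = getMaxNodesPerRack(0, 10, 8, 2);
    System.out.println("adjusted replicas=" + result[0]
        + ", maxNodesPerRack=" + result[1]);
  }
}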

