hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1184) Decommission fails if a block that needs replication has only one replica
Date Thu, 03 May 2007 17:36:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12493431 ]

dhruba borthakur commented on HADOOP-1184:

The change to the UnderReplicatedBlocks API is required because it is no longer an inner class of
FSNamesystem. This change was made to facilitate fine-grained locking of the namenode
data structures. The locking model is simpler if UnderReplicatedBlocks does not access global
variables/methods of FSNamesystem. Please let me know if this sounds reasonable.
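To illustrate the decoupling described above, here is a minimal, hypothetical sketch of a standalone UnderReplicatedBlocks. The class name and the idea of priority queues keyed by replica counts follow the Hadoop discussion, but the method names, priority thresholds, and structure below are assumptions, not the actual patch: all state the class needs arrives as parameters, so it never reads FSNamesystem globals and can be locked independently.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical sketch: a self-contained queue of under-replicated blocks.
// It holds no reference to FSNamesystem; replica counts are passed in.
class UnderReplicatedBlocks {
    // One ordered set per priority level; lower level = more urgent.
    private static final int LEVELS = 3;
    private final List<Set<String>> queues = new ArrayList<>();

    UnderReplicatedBlocks() {
        for (int i = 0; i < LEVELS; i++) {
            queues.add(new TreeSet<>());
        }
    }

    // Priority computed purely from the counts the caller supplies --
    // no access to global namenode state (thresholds are illustrative).
    private int getPriority(int curReplicas, int expectedReplicas) {
        if (curReplicas <= 0) return 0;                    // most urgent
        if (curReplicas * 3 < expectedReplicas) return 0;  // badly under-replicated
        return (curReplicas < expectedReplicas) ? 1 : 2;
    }

    synchronized boolean add(String block, int curReplicas, int expectedReplicas) {
        if (curReplicas >= expectedReplicas) return false; // fully replicated
        return queues.get(getPriority(curReplicas, expectedReplicas)).add(block);
    }

    synchronized boolean contains(String block) {
        for (Set<String> q : queues) {
            if (q.contains(block)) return true;
        }
        return false;
    }

    synchronized int size() {
        int n = 0;
        for (Set<String> q : queues) n += q.size();
        return n;
    }
}
```

Because the class is self-contained, a caller (such as the namenode) can guard it with its own lock rather than the global FSNamesystem lock, which is the simplification the comment argues for.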

> Decommission fails if a block that needs replication has only one replica
> -------------------------------------------------------------------------
>                 Key: HADOOP-1184
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1184
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>         Attachments: decommissionOneReplica3.patch, decommissionOneReplica4.patch, decommissionOneReplica5.patch,
> If the only replica of a block resides on a node being decommissioned, then the decommission
> command does not complete. The blocks do not get added to neededReplications because
> neededReplications.update() believes that the number of current replicas is zero.
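The failure mode in the quoted description can be sketched as follows. This is a hypothetical illustration, not Hadoop code: the class, method names, and NodeState enum are invented. The point is that a replica on a decommissioning node is still a readable copy source, so a count that excludes such nodes wrongly reports zero replicas and the block is never queued for re-replication.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the bug: distinguishing "live" replicas
// (on normal nodes) from usable copy sources (any node still holding data,
// including one that is decommissioning).
class DecommissionCheck {
    enum NodeState { NORMAL, DECOMMISSIONING }

    // A block needs re-replication if it has fewer live replicas than
    // expected, and re-replication is possible as long as at least one
    // readable source remains -- even on a decommissioning node.
    static boolean needsReplication(Map<String, NodeState> replicaNodes,
                                    int expectedReplicas) {
        int live = 0;     // replicas on normal nodes
        int sources = 0;  // all readable copies, decommissioning included
        for (NodeState s : replicaNodes.values()) {
            sources++;
            if (s == NodeState.NORMAL) live++;
        }
        return live < expectedReplicas && sources > 0;
    }
}
```

With the buggy behavior described in the report, the sole decommissioning replica is not counted at all, so the block looks like it has zero copies and is skipped; counting it as a source lets the block enter neededReplications and the decommission complete.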

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
