hadoop-hdfs-issues mailing list archives

From "Brandon Li (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5662) Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size
Date Wed, 18 Dec 2013 23:32:08 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brandon Li updated HDFS-5662:
-----------------------------

    Attachment: HDFS-5662.branch2.3.patch

The patch for branch-2.3 differs slightly in the unit test. Attaching it here.

> Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>             Fix For: 2.3.0
>
>         Attachments: HDFS-5662.001.patch, HDFS-5662.002.patch, HDFS-5662.branch2.3.patch
>
>
> A datanode can't be decommissioned if it holds a replica of a file whose replication factor is larger than the size of the rest of the cluster.
> One way to fix this is to introduce some kind of minimum replication factor setting, so that any datanode can be decommissioned regardless of the largest replication factor among the files it holds replicas for.
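To illustrate the idea in the quoted description, below is a minimal, hypothetical Java sketch. It is not the attached patch and the class and method names (DecommissionCheck, effectiveReplication, blockIsSufficientlyReplicated) are assumptions for illustration only; it simply shows the effect of capping a block's expected replication at the number of live datanodes so that decommission is not blocked forever.

    // Hypothetical sketch, not the HDFS-5662 patch: cap the expected
    // replication of a block at the number of live datanodes.
    public class DecommissionCheck {

        // Effective replication target for a block during decommission:
        // the file's replication factor, capped at the live cluster size.
        static int effectiveReplication(int fileReplication, int numLiveDatanodes) {
            return Math.min(fileReplication, numLiveDatanodes);
        }

        // A block stops blocking decommission once it has at least the
        // effective number of replicas on nodes that are staying in service.
        static boolean blockIsSufficientlyReplicated(int fileReplication,
                                                     int liveReplicasOnRemainingNodes,
                                                     int numLiveDatanodes) {
            return liveReplicasOnRemainingNodes
                    >= effectiveReplication(fileReplication, numLiveDatanodes);
        }

        public static void main(String[] args) {
            // Example: a file with replication factor 10, and only 3 live
            // datanodes left once the decommissioning node is excluded.
            // Without the cap, decommission would wait for 10 replicas forever;
            // with the cap, 3 replicas on the remaining nodes are enough.
            System.out.println(blockIsSufficientlyReplicated(10, 3, 3)); // true
        }
    }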



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
