hadoop-hdfs-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5662) Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size
Date Thu, 12 Dec 2013 23:45:07 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846923#comment-13846923 ]

Suresh Srinivas commented on HDFS-5662:
---------------------------------------

bq. How about just using the default replication factor?
This sounds reasonable.
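To illustrate the idea being agreed on, a minimal sketch (not the actual patch; the method and parameter names here are hypothetical) of how the decommission check could cap a file's replication factor at the configured default, so an oversized replication target no longer blocks retiring a node:

{code}
// Hypothetical helper, not actual HDFS code: caps the replication target used
// by the decommission check at the cluster's default replication factor.
static short effectiveReplicationForDecommission(short fileReplication,
                                                 short defaultReplication) {
    // e.g. a file created with "setrep 10" on a cluster where dfs.replication = 3
    // would otherwise demand 10 live replicas before its node can be retired.
    return (short) Math.min(fileReplication, defaultReplication);
}
{code}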

> Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>
> A datanode can't be decommissioned if it holds a replica of a file whose replication factor is larger than the size of the rest of the cluster.
> One way to fix this is to introduce some kind of minimum replication factor setting, so that any datanode can be decommissioned regardless of the largest replication factor among the files it holds replicas for.
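As a side note on the quoted description, a hypothetical sketch of the sufficiency check (the names and signature are illustrative, not the actual BlockManager code) shows why decommissioning stalls and how capping the target avoids it:

{code}
// Illustrative only -- not the committed fix. 'liveNodesAfterDecommission'
// is a hypothetical count of the nodes that remain in service.
static boolean sufficientForDecommission(int currentReplicas,
                                         short fileReplication,
                                         int liveNodesAfterDecommission) {
    // Without a cap, a file with replication 10 on a cluster that keeps only
    // 3 live nodes can never satisfy "currentReplicas >= 10", so the node
    // stays in DECOMMISSION_IN_PROGRESS indefinitely.
    int target = Math.min(fileReplication, liveNodesAfterDecommission);
    return currentReplicas >= target;
}
{code}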



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
