hadoop-hdfs-issues mailing list archives

From "Brandon Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5662) Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size
Date Thu, 12 Dec 2013 22:15:07 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846817#comment-13846817 ]

Brandon Li commented on HDFS-5662:
----------------------------------

Thanks for generalizing the problem. 
{quote}...adding dfs.namenode.replication.decom.min and set it to default value of 2. {quote}
How about just using the default replication factor instead? The default replication factor
is expected to be chosen by the administrators, possibly based on the reliability of their
deployment environment.
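
To make the idea concrete, here is a minimal sketch of the kind of check being discussed
(the names are illustrative, not the actual NameNode code; the floor could come from the
proposed dfs.namenode.replication.decom.min or from the default replication factor,
dfs.replication):
{code:java}
// Illustrative sketch only; hypothetical names, not the real NameNode code.
// "floor" is either the proposed dfs.namenode.replication.decom.min (default 2)
// or the default replication factor (dfs.replication), as suggested above.
static boolean sufficientForDecommission(int liveReplicas,
                                         int fileReplication,
                                         int liveNodesAfterDecommission,
                                         int floor) {
  if (liveReplicas >= fileReplication) {
    return true;  // already fully replicated, nothing blocks decommission
  }
  // The file's replication factor can never be met by the remaining cluster,
  // so accept the configured floor instead of blocking decommission forever.
  if (fileReplication > liveNodesAfterDecommission) {
    return liveReplicas >= Math.min(floor, liveNodesAfterDecommission);
  }
  return false;  // the factor is still achievable; keep waiting for re-replication
}
{code}
With the default replication factor as the floor, the check only relaxes when the remaining
cluster cannot satisfy the file's factor, so the normal guarantee is kept for everything else.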

> Can't decommission a DataNode due to file's replication factor larger than the rest of the cluster size
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5662
>                 URL: https://issues.apache.org/jira/browse/HDFS-5662
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Brandon Li
>            Assignee: Brandon Li
>
> A datanode can't be decommissioned if it has a replica that belongs to a file whose replication
> factor is larger than the size of the rest of the cluster.
> One way to fix this is to add some kind of minimum replication factor setting, so that any
> datanode can be decommissioned regardless of the largest replication factor among its replicas.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
