hadoop-hdfs-issues mailing list archives

From "Srikanth Upputuri (JIRA)" <j...@apache.org>
Subject [jira] [Work started] (HDFS-7082) When replication factor equals number of data nodes, corrupt replica will never get substituted with good replica
Date Thu, 18 Sep 2014 10:14:34 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HDFS-7082 started by Srikanth Upputuri.
-----------------------------------------------
> When replication factor equals number of data nodes, corrupt replica will never get substituted with good replica
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7082
>                 URL: https://issues.apache.org/jira/browse/HDFS-7082
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Srikanth Upputuri
>            Assignee: Srikanth Upputuri
>            Priority: Minor
>
> BlockManager will not invalidate a corrupt replica if doing so would bring the total number
> of replicas below the replication factor (except when the corrupt replica has a wrong genstamp).
> On clusters where the replication factor equals the total number of data nodes, a new replica
> cannot be created from a live replica because every available datanode already holds one.
> As a result, the corrupt replicas will never be substituted with good replicas and so will
> never get deleted. Sooner or later all replicas may become corrupt, leaving no live replicas
> for this block in the cluster.
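
A minimal, self-contained Java sketch of the invalidation decision described above. The names (CorruptReplicaInvalidationSketch, shouldInvalidateCorruptReplica) are hypothetical and this is not the actual BlockManager code; the real logic lives in the NameNode's BlockManager.

{code:java}
// Hypothetical sketch of the invalidation rule described in HDFS-7082.
// Not the real BlockManager implementation.
public class CorruptReplicaInvalidationSketch {

  /**
   * Decide whether a corrupt replica may be invalidated (deleted) right away.
   *
   * @param liveReplicas      number of non-corrupt replicas currently reported
   * @param replicationFactor configured replication factor for the block
   * @param hasWrongGenStamp  true if the corrupt replica's generation stamp is stale
   */
  static boolean shouldInvalidateCorruptReplica(int liveReplicas,
                                                int replicationFactor,
                                                boolean hasWrongGenStamp) {
    // A replica with a wrong (stale) generation stamp is always safe to delete.
    if (hasWrongGenStamp) {
      return true;
    }
    // Otherwise the corrupt replica is kept until enough good replicas exist,
    // so deleting it would not drop the block below its replication factor.
    return liveReplicas >= replicationFactor;
  }

  public static void main(String[] args) {
    // Cluster where replication factor == number of data nodes, e.g. 3 and 3:
    // one replica goes corrupt, leaving 2 live replicas. No new replica can be
    // scheduled (every datanode already holds one), so liveReplicas stays at 2
    // and the corrupt replica is never invalidated.
    System.out.println(shouldInvalidateCorruptReplica(2, 3, false)); // false, indefinitely
    System.out.println(shouldInvalidateCorruptReplica(2, 3, true));  // true (wrong genstamp)
  }
}
{code}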



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
