hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1557) Deletion of excess replicas should prefer to delete corrupted replicas before deleting valid replicas
Date Tue, 03 Jul 2007 18:09:05 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12509963 ]

dhruba borthakur commented on HADOOP-1557:
------------------------------------------

I agree. There is not much point in validating replicas just before a setReplication call
is issued. Instead, periodic disk-block validation by the Datanode might be handy in detecting
these types of problems.
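
As a rough illustration of what such a periodic check could look like, here is a minimal Java sketch of a Datanode-side verification thread that re-reads each local replica and compares it against a CRC recorded at write time. BlockVerifier, its fields, and the reportCorrupt hook are all hypothetical names for illustration, not the actual Datanode code.

    import java.io.*;
    import java.util.List;
    import java.util.zip.CRC32;

    // Hypothetical sketch: periodically re-read each replica file and
    // compare its checksum against the CRC recorded when it was written.
    class BlockVerifier implements Runnable {
        private final List<File> blockFiles;    // local replica files (assumed)
        private final List<Long> expectedCrcs;  // CRCs recorded at write time (assumed)
        private final long intervalMs;          // pause between full scans

        BlockVerifier(List<File> blockFiles, List<Long> expectedCrcs, long intervalMs) {
            this.blockFiles = blockFiles;
            this.expectedCrcs = expectedCrcs;
            this.intervalMs = intervalMs;
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                for (int i = 0; i < blockFiles.size(); i++) {
                    if (!matchesCrc(blockFiles.get(i), expectedCrcs.get(i))) {
                        reportCorrupt(blockFiles.get(i));
                    }
                }
                try {
                    Thread.sleep(intervalMs);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }

        private boolean matchesCrc(File f, long expected) {
            CRC32 crc = new CRC32();
            try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    crc.update(buf, 0, n);
                }
            } catch (IOException e) {
                return false;  // an unreadable replica is treated as corrupt
            }
            return crc.getValue() == expected;
        }

        // Placeholder for telling the namenode about a bad replica so it is
        // preferred for deletion and a good copy is re-replicated.
        private void reportCorrupt(File f) {
            System.err.println("corrupt replica: " + f);
        }
    }

A replica flagged this way could then be given priority when excess replicas are deleted, rather than validating on demand inside setReplication.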

> Deletion of excess replicas should prefer to delete corrupted replicas before deleting valid replicas
> -----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1557
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1557
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>
> Suppose a block has three replicas and two of the replicas are corrupted. If the replication
> factor of the file is reduced to 2, the filesystem should preferably delete the two corrupted
> replicas; otherwise it could lead to a corrupted file.
> One option would be to make the datanode periodically validate all blocks against their
> corresponding CRCs. The other option would be to make the setReplication call validate existing
> replicas before deleting excess replicas.
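
To make the preference in the description concrete, here is a hedged Java sketch of excess-replica selection that deletes known-corrupt replicas first, assuming corruption has already been detected (for example by the periodic scan discussed above). Replica, its corrupt flag, and chooseExcess are illustrative names, not HDFS classes.

    import java.util.*;

    // Hypothetical sketch: order replicas so corrupt ones are deleted first
    // when the replication factor is reduced.
    class Replica {
        final String datanode;
        final boolean corrupt;  // e.g. flagged by a periodic CRC scan

        Replica(String datanode, boolean corrupt) {
            this.datanode = datanode;
            this.corrupt = corrupt;
        }
    }

    class ExcessReplicaChooser {
        // Choose which replicas to delete so that targetReplication remain,
        // preferring corrupt replicas over valid ones.
        static List<Replica> chooseExcess(List<Replica> replicas, int targetReplication) {
            List<Replica> ordered = new ArrayList<>(replicas);
            ordered.sort(Comparator.comparing((Replica r) -> !r.corrupt));  // corrupt first
            int excess = Math.max(0, replicas.size() - targetReplication);
            return new ArrayList<>(ordered.subList(0, excess));
        }
    }

In the scenario above (three replicas, two corrupt, replication reduced to 2), this ordering deletes a corrupt replica rather than the valid one; a fuller implementation would also drop the remaining corrupt replica and re-replicate from the valid copy, as the description suggests.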

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

