hadoop-hdfs-issues mailing list archives

From "Dmytro Molkov (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HDFS-1351) Make it possible for BlockPlacementPolicy to return null
Date Mon, 30 Aug 2010 23:54:54 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dmytro Molkov resolved HDFS-1351.

    Resolution: Invalid

Sorry, after talking with Hairong I realized that it will not be possible to make the fix
this easy. The reason the return value cannot be null is that at this point the NameNode
knows it has to delete an extra replica of the block. If we skip this deletion, it will not
know the block has extra replicas until the next full rescan (on restart). So this jira is
itself invalid.

> Make it possible for BlockPlacementPolicy to return null
> --------------------------------------------------------
>                 Key: HDFS-1351
>                 URL: https://issues.apache.org/jira/browse/HDFS-1351
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: name-node
>    Affects Versions: 0.22.0
>            Reporter: Dmytro Molkov
>            Assignee: Dmytro Molkov
>         Attachments: HDFS-1351.patch
> The idea is to modify FSNamesystem.chooseExcessReplicates so that it can accept a null
> return from chooseReplicaToDelete, which will indicate that the NameNode should not
> delete extra replicas.
> One possible use case: if nodes being added to the cluster might have corrupt replicas
> on them, you do not want to delete other replicas until the block scanner has finished
> scanning every block on the datanode.
> This will require additional work on the implementation of the BlockPlacementPolicy,
> but with this JIRA I just wanted to create a basis for future improvements.
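
The proposed change can be sketched roughly as follows. This is a hypothetical simplification, not the actual FSNamesystem code: the method names chooseReplicaToDelete and chooseExcessReplicates come from the issue, but the surrounding types (PlacementPolicy, the use of strings for replicas) are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of HDFS-1351: let the placement policy return null
// from chooseReplicaToDelete, and have the caller (modeled loosely on
// FSNamesystem.chooseExcessReplicates) skip deletion in that case.
public class ExcessReplicaSketch {

    interface PlacementPolicy {
        // Returns the replica to delete, or null to indicate that no
        // replica should be deleted right now.
        String chooseReplicaToDelete(List<String> replicas);
    }

    // Removes at most one excess replica; a null from the policy means
    // "skip deletion", which is the behavior the issue proposed.
    static List<String> chooseExcessReplicates(List<String> replicas,
                                               PlacementPolicy policy) {
        List<String> result = new ArrayList<>(replicas);
        String victim = policy.chooseReplicaToDelete(result);
        if (victim != null) {   // the proposed null check
            result.remove(victim);
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("dn1", "dn2", "dn3", "dn4");

        // A policy that always picks the first replica for deletion.
        PlacementPolicy deleteFirst = rs -> rs.get(0);
        System.out.println(chooseExcessReplicates(replicas, deleteFirst));

        // A policy that declines to delete anything (e.g. while the block
        // scanner is still running on a newly added datanode).
        PlacementPolicy deferDeletion = rs -> null;
        System.out.println(chooseExcessReplicates(replicas, deferDeletion));
    }
}
```

As the resolution comment explains, the catch with this approach is that skipping the deletion loses state: the NameNode would not revisit the excess replica until its next full rescan.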

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
