hadoop-hdfs-issues mailing list archives

From "Brandon Li (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6016) Update datanode replacement policy to make writes more robust
Date Thu, 27 Feb 2014 22:53:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13915151#comment-13915151 ]

Brandon Li commented on HDFS-6016:

The patch looks good. Some nitpicks:
* hdfs-default.xml also needs to be updated with the new property description for dfs.client.block.write.replace-datanode-on-failure.policy (a sketch of the entry follows this list)
* there are a couple of typos in the comments of getMinimumNumberOfReplicasAllowed()
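
For reference, entries in hdfs-default.xml pair a name, a default value, and a description. A minimal sketch of the entry in question, assuming the stock ALWAYS/NEVER/DEFAULT policies; the DEFAULT wording below is illustrative, not the patch's actual text:

{code:xml}
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>DEFAULT</value>
  <description>
    This property is used only when
    dfs.client.block.write.replace-datanode-on-failure.enable is true.
    ALWAYS: always add a new datanode when an existing one is removed.
    NEVER: never add a new datanode.
    DEFAULT: add a new datanode only when the write pipeline would
    otherwise drop below a safe number of replicas (illustrative
    wording; the patch should describe the updated semantics here).
  </description>
</property>
{code}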

> Update datanode replacement policy to make writes more robust
> -------------------------------------------------------------
>                 Key: HDFS-6016
>                 URL: https://issues.apache.org/jira/browse/HDFS-6016
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, ha, hdfs-client, namenode
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>         Attachments: HDFS-6016.patch, HDFS-6016.patch
> As discussed in HDFS-5924, writers that are down to only one node due to node failures
> can suffer if a DN does not restart in time. We do not worry about writes that began
> with a single replica.
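
The scenario above is usually addressed with a client-side setting that lets a write continue on the surviving pipeline rather than fail when no replacement datanode can be found. A hedged sketch of such a knob in the same hdfs-default.xml style; the property name and default here are assumptions, not confirmed by the quoted description:

{code:xml}
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>false</value>
  <description>
    Assumed sketch: when true, the client keeps writing to the remaining
    datanodes if a replacement datanode cannot be added, instead of
    failing the write. Name and default are assumptions for illustration.
  </description>
</property>
{code}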

This message was sent by Atlassian JIRA
