hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-6016) Update datanode replacement policy to make writes more robust
Date Wed, 13 Aug 2014 15:46:13 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-6016:
-----------------------------

    Resolution: Won't Fix
        Status: Resolved  (was: Patch Available)

Per Nicholas' comment, I won't fix it.

> Update datanode replacement policy to make writes more robust
> -------------------------------------------------------------
>
>                 Key: HDFS-6016
>                 URL: https://issues.apache.org/jira/browse/HDFS-6016
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, ha, hdfs-client, namenode
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>         Attachments: HDFS-6016.patch, HDFS-6016.patch
>
>
> As discussed in HDFS-5924, writers that are down to only one node due to node failures
> can suffer if a DN does not restart in time. We do not worry about writes that began with
> a single replica.
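
For context (this note is not part of the original message): the datanode replacement behavior discussed in this issue is governed by the client-side `dfs.client.block.write.replace-datanode-on-failure.*` settings in hdfs-site.xml. A minimal sketch of the relevant properties, with their commonly documented values, might look like:

```xml
<!-- Sketch of the client-side datanode-replacement settings
     (values shown are illustrative, not a recommendation). -->
<configuration>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <!-- Whether the client attempts to replace a failed datanode
         in the write pipeline at all. -->
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <!-- DEFAULT: replace based on replication factor and pipeline size;
         ALWAYS: always try to replace a failed datanode;
         NEVER: never replace (continue with the remaining nodes). -->
    <value>DEFAULT</value>
  </property>
</configuration>
```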



--
This message was sent by Atlassian JIRA
(v6.2#6252)
