hadoop-hdfs-issues mailing list archives

From "Brahma Reddy Battula (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor
Date Tue, 19 Sep 2017 18:31:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HDFS-11799:
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.1.0
     Release Note: 
Added a new configuration, "dfs.client.block.write.replace-datanode-on-failure.min-replication".
      It sets the minimum number of replications needed to keep the write
      pipeline from failing when new datanodes cannot be found to replace
      failed datanodes (for example, due to network failure) in the write pipeline.
      If the number of datanodes remaining in the write pipeline is greater
      than or equal to this property's value, writing continues on the remaining
      nodes; otherwise an exception is thrown.

      If this is set to 0, an exception is thrown whenever a replacement
      cannot be found.
           Status: Resolved  (was: Patch Available)

Committed to {{trunk}}, {{branch-3.0}}, {{branch-2}}, and {{branch-2.8}}. [~yzhangal], thanks
a lot for the continuous review. Resolved minor conflicts for {{branch-2}} and {{branch-2.8}}
and ran the test case locally.
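For reference, the new property is a client-side setting and can be configured like any other HDFS option; a minimal hdfs-site.xml sketch (the value 1 below is only an illustrative example, not a recommendation):

```xml
<!-- Client-side setting; example value only. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.min-replication</name>
  <!-- Continue writing as long as at least this many datanodes remain in the
       pipeline when a replacement cannot be found; setting 0 keeps the prior
       behavior of failing the write when no replacement is available. -->
  <value>1</value>
</property>
```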

> Introduce a config to allow setting up write pipeline with fewer nodes than replication
> ----------------------------------------------------------------------------------------------
>                 Key: HDFS-11799
>                 URL: https://issues.apache.org/jira/browse/HDFS-11799
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>            Assignee: Brahma Reddy Battula
>             Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0
>         Attachments: HDFS-11799-002.patch, HDFS-11799-003.patch, HDFS-11799-004.patch,
> HDFS-11799-005.patch, HDFS-11799-006.patch, HDFS-11799-007.patch, HDFS-11799-008.patch, HDFS-11799-009.patch,
> During pipeline recovery, if not enough DNs can be found and
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue even if there is only a single DN.
> Similarly, when we create the write pipeline initially, if for some reason we can't find
> enough DNs, we can have a similar config to enable writing with a single DN.
> More study will be done.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
