hadoop-hdfs-issues mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11799) Introduce a config to allow setting up write pipeline with fewer nodes than replication factor
Date Sat, 24 Jun 2017 06:44:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061828#comment-16061828 ]

Yongjun Zhang commented on HDFS-11799:
--------------------------------------

Hi [~brahmareddy],

Thanks for working on this and sorry for the delayed review.

Some comments:

1. Suggest adding a comment about dtpReplaceDatanodeOnFailureReplication where it is
introduced, and adding a default entry with a description in hdfs-default.xml.
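For reference, the hdfs-default.xml entry could look roughly like the following (the property name and default value here are my assumptions and should be confirmed against the patch):

{code}
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.min-replication</name>
  <value>0</value>
  <description>
    The minimum number of replications needed to not fail the write pipeline
    when new datanodes cannot be found to replace failed ones. Setting this
    to 0 keeps the current behavior of failing the write.
  </description>
</property>
{code}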
2. About the following code:
{code}
      final int d;
      try {
        d = findNewDatanode(original);
      } catch (IOException ioe) {
        if (dfsClient.dtpReplaceDatanodeOnFailureReplication > 0 && nodes.length
            >= dfsClient.dtpReplaceDatanodeOnFailureReplication) {
          DFSClient.LOG.warn(
              "Failed to add a new datanode for write pipeline, minimum block replication:"
                  + dfsClient.dtpReplaceDatanodeOnFailureReplication
                  + ", good datanode size: " + nodes.length);
          return;
        }
        throw ioe;
      }
{code}
2.1 Add a comment explaining why it is safe to continue the pipeline here.
2.2 Pass the ioe to the warn call, so the exception is exposed in the log.
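To illustrate both sub-points, the catch block might look like this (a sketch only, not the final wording; note the ioe passed as the second argument to warn so the stack trace reaches the log):

{code}
      } catch (IOException ioe) {
        // If the number of remaining good datanodes still satisfies the
        // configured minimum replication, continue writing with the
        // current pipeline instead of failing the write.
        if (dfsClient.dtpReplaceDatanodeOnFailureReplication > 0 && nodes.length
            >= dfsClient.dtpReplaceDatanodeOnFailureReplication) {
          DFSClient.LOG.warn(
              "Failed to add a new datanode for write pipeline, minimum block replication:"
                  + dfsClient.dtpReplaceDatanodeOnFailureReplication
                  + ", good datanode size: " + nodes.length, ioe);
          return;
        }
        throw ioe;
      }
{code}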

3. Any chance of adding a unit test?

Thanks.


> Introduce a config to allow setting up write pipeline with fewer nodes than replication factor
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-11799
>                 URL: https://issues.apache.org/jira/browse/HDFS-11799
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yongjun Zhang
>         Attachments: HDFS-11799.patch
>
>
> During pipeline recovery, if not enough DNs can be found and
> dfs.client.block.write.replace-datanode-on-failure.best-effort
> is enabled, we let the pipeline continue, even if only a single DN remains.
> Similarly, when we create the write pipeline initially, if for some reason we can't find
> enough DNs, we can have a similar config to enable writing with a single DN.
> More study will be done.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

