hadoop-hdfs-issues mailing list archives

From "Yi Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9106) Transfer failure during pipeline recovery causes permanent write failures
Date Mon, 21 Sep 2015 13:10:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14900647#comment-14900647 ]

Yi Liu commented on HDFS-9106:
------------------------------

Thanks [~kihwal] for working on this.  I have a few comments:

*1.* 
{code}
+      try {
+        //get a new datanode
+        lb = dfsClient.namenode.getAdditionalDatanode(
+            src, stat.getFileId(), block, nodes, storageIDs,
+            exclude.toArray(new DatanodeInfo[exclude.size()]),
+            1, dfsClient.clientName);
+      } catch (IOException ioe) {
+        DFSClient.LOG.warn("Error while asking for a new node to namenode: "
+            + ioe.getMessage());
+        caughtException = ioe;
+        tried++;
+        continue;
+      }
{code}
I see you catch the IOException from the RPC to the NameNode. For {{dfsClient.namenode}}, we already
have a retry policy for RPCs to the NameNode. Which IOExceptions do you want to handle here?
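For reference, here is a minimal sketch of the retry-proxy pattern from {{org.apache.hadoop.io.retry}} that I have in mind. The interface, policy, and numbers below are illustrative assumptions, not the actual proxy construction in {{NameNodeProxies}}:
{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;

public class RetryProxySketch {
  // Hypothetical narrow interface standing in for ClientProtocol.
  interface NamenodeOps {
    String getAdditionalDatanode(String src) throws IOException;
  }

  static NamenodeOps wrap(NamenodeOps rawProxy) {
    // Retry transient failures a few times with a fixed sleep between attempts.
    RetryPolicy policy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
        3, 1, TimeUnit.SECONDS);
    // Calls through the returned proxy are retried per the policy; the caller
    // only sees an IOException once the retries are exhausted.
    return (NamenodeOps) RetryProxy.create(NamenodeOps.class, rawProxy, policy);
  }
}
{code}
So by the time an IOException surfaces in the new catch block, the RPC-layer retries have already been used up.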

*2.*
The following looks reasonable.
{quote}
Transfer timeout needs to be different from the per-packet timeout.
The transfer should be retried if it fails.
{quote}
The patch allows 3 tries, so ideally we can try 3 different datanodes. My question is: why did we
originally go with {{bestEffort}} instead of implementing retries? Was it for performance reasons?
It should rarely happen that, after retrying 3 times, we still can't find a good replacement datanode.
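For clarity, this is my reading of the retry shape being discussed, as a sketch with hypothetical helper names (not the patch itself); {{T}} stands in for {{DatanodeInfo}}:
{code}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Sketch only: hypothetical helpers, not the actual DFSOutputStream code.
abstract class TransferRetrySketch<T> {
  abstract T askNamenodeForNewNode(Set<T> exclude) throws IOException;
  abstract void transferBlock(T target) throws IOException;

  void transferWithRetries() throws IOException {
    final int maxTries = 3;                 // matches the 3 tries in the patch
    Set<T> exclude = new HashSet<T>();
    IOException caught = null;
    for (int tried = 0; tried < maxTries; tried++) {
      T target = askNamenodeForNewNode(exclude);
      try {
        transferBlock(target);              // success: pipeline recovery proceeds
        return;
      } catch (IOException ioe) {
        caught = ioe;
        exclude.add(target);                // exclude it so the next ask returns a different node
      }
    }
    throw caught;                           // all tries exhausted
  }
}
{code}
Excluding each failed target is what makes the 3 tries correspond to 3 different datanodes.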

> Transfer failure during pipeline recovery causes permanent write failures
> -------------------------------------------------------------------------
>
>                 Key: HDFS-9106
>                 URL: https://issues.apache.org/jira/browse/HDFS-9106
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>         Attachments: HDFS-9106-poc.patch
>
>
> When a new node is added to a write pipeline during flush/sync, if the partial block transfer fails, the write will fail permanently without retrying or continuing with whatever is in the pipeline.
> The transfer often fails in busy clusters due to timeout. There is no per-packet ACK between client and datanode or between source and target datanodes. If the total transfer time exceeds the configured timeout + 10 seconds (2 * 5 seconds slack), it is considered failed. Naturally, the failure rate is higher with bigger block sizes.
> I propose the following changes:
> - Transfer timeout needs to be different from the per-packet timeout.
> - The transfer should be retried if it fails.
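
As a side note on the timeout arithmetic described above, a small sketch of the current failure condition, with hypothetical constant and method names, assuming the 5-second slack mentioned in the report:
{code}
// Sketch only: hypothetical names; the 2 * 5 seconds slack is from the report above.
class TransferTimeoutSketch {
  static final long SLACK_MS = 5000L;                   // one slack period

  // Total time allowed for the whole partial-block transfer: the configured
  // timeout plus two slack periods. Exceeding this marks the transfer failed,
  // regardless of the per-packet timeout.
  static long transferDeadlineMs(long configuredTimeoutMs) {
    return configuredTimeoutMs + 2 * SLACK_MS;
  }
}
{code}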



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
