Date: Tue, 9 Feb 2016 03:58:18 +0000 (UTC)
From: "Walter Su (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-9752) Permanent write failures may happen to slow writers during datanode rolling upgrades

[ https://issues.apache.org/jira/browse/HDFS-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138306#comment-15138306 ]

Walter Su commented on HDFS-9752:
---------------------------------

Thanks all for reviewing the patch. The patch depends on HDFS-9347, which I just cherry-picked to 2.6.5. I have now uploaded the separate patch for 2.7/2.6.
> Permanent write failures may happen to slow writers during datanode rolling upgrades
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-9752
>                 URL: https://issues.apache.org/jira/browse/HDFS-9752
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Walter Su
>            Priority: Critical
>         Attachments: HDFS-9752-branch-2.6.03.patch, HDFS-9752-branch-2.7.03.patch, HDFS-9752.01.patch, HDFS-9752.02.patch, HDFS-9752.03.patch, HdfsWriter.java
>
> When datanodes are being upgraded, an out-of-band ack is sent upstream and the client does a pipeline recovery. The client may hit this multiple times as more nodes get upgraded. This normally does not cause any issue, but if the client is holding the stream open without writing any data during this time, a permanent write failure can occur.
>
> This is because there is a limit of 5 recovery trials for the same packet, which is tracked by the "last acked sequence number". Since the empty heartbeat packets of an idle output stream do not increment the sequence number, the write will fail after seeing 5 pipeline breakages caused by datanode upgrades.
>
> This check/limit was added to avoid spinning until running out of nodes in the cluster due to corruption or other irrecoverable conditions. The datanode upgrade-restart should be excluded from the count.

-- 
This message was sent by Atlassian JIRA (v6.3.4#6332)
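The failure mode in the description can be sketched with a toy model of the recovery counter. This is not the actual DataStreamer code; the class, field, and method names below are illustrative, and the only assumption carried over from the report is the limit of 5 recovery trials keyed to the last acked sequence number:

```java
// Simplified model of the recovery-trial limit described in HDFS-9752.
// Illustrates why an idle writer fails permanently: heartbeat packets do
// not advance the acked sequence number, so every pipeline breakage looks
// like a repeated failure of the "same" packet and the counter never resets.
public class PipelineRecoveryModel {
    static final int MAX_RECOVERY_ERRORS = 5; // the 5-trial limit from the report

    long lastAckedSeqno = 0;      // advanced only when a real data packet is acked
    long lastRecoveredSeqno = -1; // seqno at the time of the previous recovery
    int recoveryErrorCount = 0;

    /** Called on each pipeline breakage; returns false once the limit is hit. */
    boolean onPipelineFailure() {
        if (lastAckedSeqno == lastRecoveredSeqno) {
            recoveryErrorCount++;           // no progress since last recovery
        } else {
            recoveryErrorCount = 1;         // progress was made; counter resets
            lastRecoveredSeqno = lastAckedSeqno;
        }
        return recoveryErrorCount < MAX_RECOVERY_ERRORS;
    }

    /** Called when a data packet (not a heartbeat) is acknowledged. */
    void onDataPacketAcked(long seqno) {
        lastAckedSeqno = seqno;
    }

    public static void main(String[] args) {
        PipelineRecoveryModel idle = new PipelineRecoveryModel();
        boolean ok = true;
        // Five datanode upgrade-restarts with no data written in between:
        for (int i = 0; i < 5; i++) {
            ok = idle.onPipelineFailure();
        }
        System.out.println(ok ? "still writing" : "permanent failure");
    }
}
```

Running the model shows the idle stream exhausting the limit after five breakages, which is exactly the scenario the patch addresses by excluding upgrade-restarts from the count. A writer that acks even one data packet between breakages would reset the counter and survive.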