From: "Kihwal Lee (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Wed, 3 Feb 2016 22:11:39 +0000 (UTC)
Subject: [jira] [Created] (HDFS-9752) Permanent write failures may happen to slow writers during datanode rolling upgrades

Kihwal Lee created HDFS-9752:
--------------------------------

    Summary: Permanent write failures may happen to slow writers during datanode rolling upgrades
        Key: HDFS-9752
        URL: https://issues.apache.org/jira/browse/HDFS-9752
    Project: Hadoop HDFS
 Issue Type: Bug
   Reporter: Kihwal Lee
   Priority: Critical

When datanodes are being upgraded, an out-of-band ack is sent upstream and the client does a pipeline recovery. The client may hit this multiple times as more nodes get upgraded.
This normally does not cause any issue, but if the client is holding the stream open without writing any data during this time, a permanent write failure can occur. This is because there is a limit of 5 recovery trials for the same packet, tracked by the "last acked sequence number". Since the empty heartbeat packets for an idle output stream do not increment the sequence number, the write will fail after seeing 5 pipeline breakages caused by datanode upgrades.

This check/limit was added to avoid spinning until running out of nodes in the cluster due to corruption or other irrecoverable conditions. The datanode upgrade-restart should be excluded from the count.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
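The failure mode described above can be sketched with a small model. This is illustrative only, not the actual DFSOutputStream/DataStreamer code; the class and field names (`PipelineRecoveryModel`, `lastAckedSeqno`, `MAX_RECOVERY_TRIALS`) are assumptions for the sketch, though the 5-trial cap keyed on the last acked sequence number matches the report:

```java
// Simplified model of the recovery-trial limit described in this report.
// A recovery attempt only resets the counter when the last acked sequence
// number has advanced since the previous break. An idle stream sends only
// heartbeat packets, which (per the report) do not advance the sequence
// number, so repeated upgrade-induced pipeline breaks all count against
// the same packet and eventually exhaust the 5-trial budget.
public class PipelineRecoveryModel {
    static final int MAX_RECOVERY_TRIALS = 5; // limit cited in the report

    long lastAckedSeqno = -1;   // seqno at the time of the previous break
    int recoveryTrials = 0;

    /** Returns false when the write must fail permanently. */
    boolean onPipelineBreak(long currentAckedSeqno) {
        if (currentAckedSeqno > lastAckedSeqno) {
            // Data was acked since the last break: this is a new packet,
            // so the trial counter starts over.
            recoveryTrials = 0;
            lastAckedSeqno = currentAckedSeqno;
        }
        recoveryTrials++;
        return recoveryTrials <= MAX_RECOVERY_TRIALS;
    }

    public static void main(String[] args) {
        // Idle writer during a rolling upgrade: the acked seqno never moves,
        // so six pipeline breaks exceed the 5-trial limit.
        PipelineRecoveryModel idle = new PipelineRecoveryModel();
        boolean ok = true;
        for (int i = 0; i < 6; i++) {
            ok = idle.onPipelineBreak(-1); // heartbeats only, seqno frozen
        }
        System.out.println(ok ? "still writing" : "permanent failure");
        // -> permanent failure
    }
}
```

An active writer would keep advancing the acked sequence number between breaks, resetting the counter each time, which is why only slow/idle writers hit this failure.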