Date: Fri, 5 Feb 2010 15:07:27 +0000 (UTC)
From: "dhruba borthakur (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] Commented: (HDFS-951) DFSClient should handle all nodes in a pipeline failed.

    [ https://issues.apache.org/jira/browse/HDFS-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12830125#action_12830125 ]

dhruba borthakur commented on HDFS-951:
---------------------------------------

If all datanodes in the pipeline are dead, then the application cannot write any more to the file. (This can be improved, of course.) Are you saying that throwing exceptions to the write/close call (after all datanodes in the pipeline have failed) is a problem? Or are you saying that when all datanodes in the pipeline fail, all resources associated with that OutputStream should be automatically released?

> DFSClient should handle all nodes in a pipeline failed.
> -------------------------------------------------------
>
>                 Key: HDFS-951
>                 URL: https://issues.apache.org/jira/browse/HDFS-951
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: He Yongqiang
>
> processDatanodeError -> setupPipelineForAppendOrRecovery sets streamerClosed to true if all nodes in the pipeline have failed, and just returns.
> Back in DataStreamer.run(), the guard
>
>   if (streamerClosed || hasError || dataQueue.size() == 0 || !clientRunning) {
>     continue;
>   }
>
> just sends the loop back to its exit test, and closeInternal() then sets closed=true.
> The DFSOutputStream never gets a chance to clean up: subsequent write/close calls will throw an exception or return null.
> This leaves the file being written in an incomplete state.
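
A compressed, runnable model of the control flow the report above describes may make the bug easier to see. The names (streamerClosed, closeInternal, dataQueue, processDatanodeError) mirror the HDFS ones, but this class is a sketch written for this discussion, not the actual DFSClient source; the "all nodes dead" outcome is hard-coded to force the failing path, and the wait on dataQueue that the real code performs is omitted.

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class StreamerSketch {
        private boolean streamerClosed = false;  // set when the whole pipeline is dead
        private boolean hasError = false;
        private boolean clientRunning = true;
        private boolean closed = false;          // set by closeInternal()
        private final Queue<String> dataQueue = new ArrayDeque<>();

        // Stand-in for processDatanodeError -> setupPipelineForAppendOrRecovery:
        // when every node in the pipeline has failed, it only flips the flag.
        private void processDatanodeError() {
            boolean allNodesDead = true;         // hard-coded worst case for the sketch
            if (allNodesDead) {
                streamerClosed = true;
                return;
            }
            hasError = false;                    // a repaired pipeline would resume here
        }

        public void run() {
            while (!streamerClosed && clientRunning) {
                if (hasError) {
                    processDatanodeError();
                }
                // The guard quoted in the report: with streamerClosed just set,
                // 'continue' bounces straight to the while test, which exits.
                if (streamerClosed || hasError || dataQueue.isEmpty() || !clientRunning) {
                    continue;                    // the real code wait()s here; omitted
                }
                System.out.println("sent " + dataQueue.poll());
            }
            closeInternal();
        }

        // Only marks the streamer closed; nothing releases the resources the
        // enclosing DFSOutputStream holds, so later write()/close() can only fail.
        private void closeInternal() {
            closed = true;
        }

        public static void main(String[] args) {
            StreamerSketch s = new StreamerSketch();
            s.dataQueue.add("packet-1");
            s.hasError = true;                   // pipeline failure detected
            s.run();                             // exits via closeInternal(); packet-1 never sent
            System.out.println("closed=" + s.closed + ", unsent=" + s.dataQueue.size());
        }
    }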
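
On the comment's first question, a sketch of what an application has to do today: once the whole pipeline is gone, write()/close() can only throw, so the caller must catch the IOException and abandon the stream itself. FileSystem, Path, FSDataOutputStream, and IOUtils.closeStream are standard Hadoop APIs; the file path and the log-and-give-up recovery policy are invented for the example.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class WriteWithPipelineFailureHandling {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path path = new Path("/tmp/hdfs-951-example");   // hypothetical path
            FSDataOutputStream out = fs.create(path);
            try {
                out.write("payload".getBytes());
                out.close();
            } catch (IOException e) {
                // All pipeline datanodes failed: the stream is unusable and the
                // file is left incomplete, so treat the write as lost.
                System.err.println("write pipeline failed: " + e);
                IOUtils.closeStream(out);                    // best-effort local cleanup
            }
        }
    }

Whether this burden stays on the application, or the stream releases its own resources once the pipeline is unrecoverable, is exactly the question the comment above poses.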