Message-ID: <1251116946.1205193406449.JavaMail.jira@brutus>
Date: Mon, 10 Mar 2008 16:56:46 -0700 (PDT)
From: "dhruba borthakur (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Subject: [jira] Updated: (HADOOP-2926) Ignoring IOExceptions on close
In-Reply-To: <416668273.1204568091072.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/HADOOP-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-2926:
-------------------------------------
    Status: Open  (was: Patch Available)

This would need some rework if we want it for 0.17.

> Ignoring IOExceptions on close
> ------------------------------
>
>                 Key: HADOOP-2926
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2926
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Owen O'Malley
>            Assignee: dhruba borthakur
>            Priority: Critical
>         Attachments: closeStream.patch
>
>
> Currently in HDFS there are a lot of calls to IOUtils.closeStream that are made from finally blocks. I'm worried that this can lead to data corruption in the file system. Take the first instance in DataNode.copyBlock: it writes the block and then calls closeStream on the output stream. If there is an error at the end of the file that is detected in the close, it will be *completely* ignored. Note that logging the error is not enough; the error should be thrown so that the client knows the failure happened.
> {code}
> try {
>   file1.write(...);
>   file2.write(...);
> } finally {
>   IOUtils.closeStream(file1);
>   IOUtils.closeStream(file2);
> }
> {code}
> is *bad*. It must be rewritten as:
> {code}
> try {
>   file1.write(...);
>   file2.write(...);
>   file1.close();
>   file2.close();
> } catch (IOException ie) {
>   IOUtils.closeStream(file1);
>   IOUtils.closeStream(file2);
>   throw ie;
> }
> {code}
> I also think that IOUtils.closeStream should be renamed IOUtils.cleanupFailedStream or something similar, to make it clear it should only be used after the write operation has failed and is being cleaned up.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
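The pattern Owen describes can be sketched as a small self-contained Java example. This is illustrative only: the class, method, and file names below are made up for the sketch, and `closeQuietly` stands in for Hadoop's `IOUtils.closeStream` (which simply swallows close failures). The point is that close() on the success path must be allowed to throw, while quiet cleanup is reserved for a stream whose write has already failed.

```java
import java.io.Closeable;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseExample {

    // Stand-in for IOUtils.closeStream: swallows close() failures.
    // Only safe to call on a stream whose operation has already failed.
    static void closeQuietly(Closeable c) {
        if (c == null) {
            return;
        }
        try {
            c.close();
        } catch (IOException ignored) {
            // Intentionally ignored: the real error is already being thrown.
        }
    }

    // Success path closes explicitly, so an IOException raised by close()
    // (e.g. a deferred flush error) propagates to the caller instead of
    // being silently dropped by a finally block.
    static void writeRecord(String path, byte[] data) throws IOException {
        OutputStream out = new FileOutputStream(path);
        try {
            out.write(data);
            out.close();           // failures here are NOT swallowed
        } catch (IOException ie) {
            closeQuietly(out);     // best-effort cleanup; original error still thrown
            throw ie;
        }
    }

    public static void main(String[] args) throws IOException {
        writeRecord("example.out", "hello".getBytes());
        System.out.println("wrote " + "example.out");
    }
}
```

Since Java 7, try-with-resources expresses the same intent more compactly: a close() failure on the success path propagates, and if both the body and close() throw, the body's exception wins with the close failure attached as a suppressed exception.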