From: "dhruba borthakur (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Date: Thu, 6 Mar 2008 13:48:58 -0800 (PST)
Subject: [jira] Updated: (HADOOP-2926) Ignoring IOExceptions on close

    [ https://issues.apache.org/jira/browse/HADOOP-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-2926:
-------------------------------------

    Fix Version/s:     (was: 0.16.1)

Looking at the code more closely, it appears there isn't a bug for this patch to address: the DataNode correctly ignores these exceptions. This issue is left open to address coding-style concerns. One suggestion is to log an error message when the close throws an exception; another is to rename the method to closeIgnoreExceptions().
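As a rough illustration of the suggestion above (an editor's sketch, not the attached closeStream.patch or Hadoop's actual IOUtils code), a helper with the proposed name might look like the following; the CloseUtil class name, the commons-logging setup, and the Closeable parameter type are assumptions made for the example:

{code}
// Editor's sketch only -- not the code from closeStream.patch or Hadoop's IOUtils.
import java.io.Closeable;
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class CloseUtil {
  private static final Log LOG = LogFactory.getLog(CloseUtil.class);

  /**
   * Close the stream, logging -- but deliberately swallowing -- any IOException.
   * Intended only for cleanup paths where the primary error has already been reported.
   */
  public static void closeIgnoreExceptions(Closeable stream) {
    if (stream == null) {
      return;
    }
    try {
      stream.close();
    } catch (IOException e) {
      LOG.error("Ignoring exception while closing stream", e);
    }
  }
}
{code}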
> Ignoring IOExceptions on close
> ------------------------------
>
>                 Key: HADOOP-2926
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2926
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Owen O'Malley
>            Assignee: dhruba borthakur
>            Priority: Critical
>         Attachments: closeStream.patch
>
>
> Currently in HDFS there are a lot of calls to IOUtils.closeStream that are made from finally blocks. I'm worried that this can lead to data corruption in the file system. Take the first instance in DataNode.copyBlock: it writes the block and then calls closeStream on the output stream. If there is an error at the end of the file that is detected in the close, it will be *completely* ignored. Note that logging the error is not enough; the error should be thrown so that the client knows the failure happened.
> {code}
> try {
>   file1.write(...);
>   file2.write(...);
> } finally {
>   IOUtils.closeStream(file1);
>   IOUtils.closeStream(file2);
> }
> {code}
> is *bad*. It must be rewritten as:
> {code}
> try {
>   file1.write(...);
>   file2.write(...);
>   file1.close();
>   file2.close();
> } catch (IOException ie) {
>   IOUtils.closeStream(file1);
>   IOUtils.closeStream(file2);
>   throw ie;
> }
> {code}
> I also think that IOUtils.closeStream should be renamed IOUtils.cleanupFailedStream or something to make it clear it can only be used after the write operation has failed and is being cleaned up.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
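For completeness, a self-contained sketch of the write-then-close pattern recommended in the description above (an editor's illustration, not DataNode code; the WriteExample class, method, and file names are made up, and it assumes org.apache.hadoop.io.IOUtils is on the classpath):

{code}
// Editor's illustration of the pattern described in the issue; not DataNode code.
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.io.IOUtils;

public class WriteExample {
  public static void writeBoth(byte[] data) throws IOException {
    OutputStream file1 = new FileOutputStream("block.data");
    OutputStream file2 = new FileOutputStream("block.meta");
    try {
      file1.write(data);
      file2.write(data);
      // Close explicitly on the success path so a failure in close()
      // propagates to the caller instead of being swallowed.
      file1.close();
      file2.close();
    } catch (IOException ie) {
      // Failure path only: best-effort cleanup, then rethrow the original error.
      IOUtils.closeStream(file1);
      IOUtils.closeStream(file2);
      throw ie;
    }
  }
}
{code}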