From: "Raghu Angadi (JIRA)"
To: core-dev@hadoop.apache.org
Subject: [jira] Commented: (HADOOP-4760) HDFS streams should not throw exceptions when closed twice
Date: Tue, 10 Feb 2009 08:02:04 -0800 (PST)

    [ https://issues.apache.org/jira/browse/HADOOP-4760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12672298#action_12672298 ]

Raghu Angadi commented on HADOOP-4760:
--------------------------------------

bq. import statements are reordered and * are converted to actual classes again by save actions. As in this case here, the import statements are constantly switched between actual class names or * between patches.

hmm.. I am pretty sure Eclipse can be configured not to do that (in fact, by default it may not do that).

If every patch includes a lot of corrections like this, it would be pretty hard to track and maintain. There might even be constant flips committed because of minor variations between Eclipse configurations or the JDKs that different Eclipse environments use. That is pretty error prone as well. I am -0.5 on these. I might be biased here, since I make sure my patches are not polluted even by minor whitespace changes.

At the least, two separate patches would be much better. Note that it should be okay to fix the code immediately around the actual code changes.
> HDFS streams should not throw exceptions when closed twice
> -----------------------------------------------------------
>
>                 Key: HADOOP-4760
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4760
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.18.4, 0.19.1, 0.20.0, 0.21.0
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Enis Soztutar
>             Fix For: 0.20.0
>
>         Attachments: closehdfsstream_v1.patch, closehdfsstream_v2.patch
>
>
> When adding an {{InputStream}} to a {{Configuration}} instance via {{addResource(InputStream)}}, if the stream is an HDFS stream the {{loadResource(..)}} method fails with an {{IOException}} indicating that the stream has already been closed.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
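For illustration only, a minimal sketch of the idempotent-close behaviour the issue asks for, written as a hypothetical wrapper class. The class name {{IdempotentCloseInputStream}} and the wrapper approach are assumptions for this sketch; the attached patches change the HDFS stream classes themselves.

{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Illustrative sketch only (not the attached patch): a wrapper whose
 * close() becomes a no-op after the first call, so closing the stream
 * twice does not throw "stream already closed".
 */
public class IdempotentCloseInputStream extends FilterInputStream {
  private boolean closed = false;

  public IdempotentCloseInputStream(InputStream in) {
    super(in);
  }

  @Override
  public synchronized void close() throws IOException {
    if (closed) {
      return; // second and later close() calls are silently ignored
    }
    closed = true;
    super.close();
  }
}
{code}

With a guard like this, a second close() on the stream passed to {{addResource(InputStream)}} would simply be ignored, which matches the {{java.io.Closeable}} contract ("if the stream is already closed then invoking this method has no effect").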