Message-ID: <27725596.1168541487604.JavaMail.jira@brutus>
Date: Thu, 11 Jan 2007 10:51:27 -0800 (PST)
From: "Raghu Angadi (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-757) "Bad File Descriptor" in closing DFS file
In-Reply-To: <30899512.1164778521544.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HADOOP-757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12463989 ]

Raghu Angadi commented on HADOOP-757:
-------------------------------------

HADOOP-758 has a patch that handles exceptions better and ignores the exception shown in the description above. That alone is not enough, since we are actually trying to write to a file descriptor we no longer own, which could corrupt data if someone else has since been handed that descriptor. It is not clear why FileOutputStream does not detect that it is already closed. The fix is to set backupStream to null when it is closed and handle the null case on later writes.
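For illustration only, a minimal sketch of the idempotent-close guard the comment proposes; this is not the actual DFSClient patch, and the class name GuardedOutputStream is hypothetical (the field name backupStream is taken from the comment above):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class GuardedOutputStream extends OutputStream {
    // Local buffer stream; nulled out on close so the fd can never be reused.
    private FileOutputStream backupStream;

    GuardedOutputStream(FileOutputStream backing) {
        this.backupStream = backing;
    }

    @Override
    public void write(int b) throws IOException {
        if (backupStream == null) {
            // Fail fast instead of writing to a descriptor someone else may now own.
            throw new IOException("stream already closed");
        }
        backupStream.write(b);
    }

    @Override
    public void flush() throws IOException {
        if (backupStream == null) {
            return; // a second flush() from an outer FilterOutputStream becomes a no-op
        }
        backupStream.flush();
    }

    @Override
    public void close() throws IOException {
        if (backupStream == null) {
            return; // idempotent: the stack trace shows nested FilterOutputStream.close() calls
        }
        try {
            backupStream.close();
        } finally {
            backupStream = null; // drop the reference so later calls cannot touch the old fd
        }
    }
}

The null check matters because FilterOutputStream.close() happily delegates again on a second call, as the repeated FilterOutputStream.close frames in the trace below suggest.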
> "Bad File Descriptor" in closing DFS file > ----------------------------------------- > > Key: HADOOP-757 > URL: https://issues.apache.org/jira/browse/HADOOP-757 > Project: Hadoop > Issue Type: Bug > Components: dfs > Affects Versions: 0.8.0 > Reporter: Owen O'Malley > Assigned To: Raghu Angadi > > Running the sort benchmark, I had a reduce fail with a DFS error: > java.io.IOException: Bad file descriptor > at java.io.FileOutputStream.writeBytes(Native Method) > at java.io.FileOutputStream.write(FileOutputStream.java:260) > at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flushData(DFSClient.java:1128) > at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1114) > at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1241) > at java.io.FilterOutputStream.close(FilterOutputStream.java:143) > at org.apache.hadoop.fs.FSDataOutputStream$Summer.close(FSDataOutputStream.java:99) > at java.io.FilterOutputStream.close(FilterOutputStream.java:143) > at java.io.FilterOutputStream.close(FilterOutputStream.java:143) > at java.io.FilterOutputStream.close(FilterOutputStream.java:143) > at org.apache.hadoop.io.SequenceFile$Writer.close(SequenceFile.java:515) > at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.close(SequenceFileOutputFormat.java:71) > at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:310) > at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1271) -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa - For more information on JIRA, see: http://www.atlassian.com/software/jira