hadoop-common-commits mailing list archives

From: cutt...@apache.org
Subject: svn commit: r492243 - in /lucene/hadoop/trunk: CHANGES.txt src/java/org/apache/hadoop/io/SequenceFile.java
Date: Wed, 03 Jan 2007 18:28:27 GMT
Author: cutting
Date: Wed Jan  3 10:28:26 2007
New Revision: 492243

URL: http://svn.apache.org/viewvc?view=rev&rev=492243
Log:
HADOOP-849.  Fix OutOfMemory exceptions in TaskTracker due to a file handle leak in SequenceFile.
 Contributed by Devaraj.
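
For context, the leak follows a familiar merge-path pattern: segments that contain no records (only a sequence file header) are never pushed onto the priority queue, so the queue-driven cleanup never reaches them and their readers stay open until the TaskTracker runs out of file handles and heap. The sketch below illustrates that pattern and the shape of the fix; the names (MergeSketch, SegmentStream, hasRecords) are placeholders, not Hadoop's actual classes, and the real cleanup() also handles deleting the backing file, which this sketch omits.

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.List;
    import java.util.PriorityQueue;

    // Hypothetical sketch of the leak pattern addressed by HADOOP-849; not Hadoop code.
    class MergeSketch {
      static class SegmentStream implements Closeable, Comparable<SegmentStream> {
        Closeable in;                 // underlying reader; null once closed
        final long firstKey;
        SegmentStream(Closeable in, long firstKey) { this.in = in; this.firstKey = firstKey; }
        boolean hasRecords() { return firstKey >= 0; }   // stand-in for "more than just a header"
        public int compareTo(SegmentStream o) { return Long.compare(firstKey, o.firstKey); }
        public void close() throws IOException {
          if (in != null) { in.close(); in = null; }     // null marks "already cleaned up"
        }
      }

      static void merge(List<SegmentStream> streams) throws IOException {
        PriorityQueue<SegmentStream> queue = new PriorityQueue<>();
        for (SegmentStream s : streams) {
          if (s.hasRecords()) queue.add(s);              // empty segments never enter the queue...
        }
        while (!queue.isEmpty()) {
          queue.poll().close();                          // ...so queue-driven cleanup never closes them
        }
        for (SegmentStream s : streams) {                // the HADOOP-849-style fix: sweep all streams
          if (s.in != null) s.close();                   // and close anything still open
        }
      }
    }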

Modified:
    lucene/hadoop/trunk/CHANGES.txt
    lucene/hadoop/trunk/src/java/org/apache/hadoop/io/SequenceFile.java

Modified: lucene/hadoop/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/CHANGES.txt?view=diff&rev=492243&r1=492242&r2=492243
==============================================================================
--- lucene/hadoop/trunk/CHANGES.txt (original)
+++ lucene/hadoop/trunk/CHANGES.txt Wed Jan  3 10:28:26 2007
@@ -156,6 +156,10 @@
 43. HADOOP-844.  Send metrics messages on a fixed-delay schedule
     instead of a fixed-rate schedule.  (David Bowen via cutting)
 
+44. HADOOP-849.  Fix OutOfMemory exceptions in TaskTracker due to a
+    file handle leak in SequenceFile.  (Devaraj Das via cutting)
+
+
 Release 0.9.2 - 2006-12-15
 
  1. HADOOP-639. Restructure InterTrackerProtocol to make task

Modified: lucene/hadoop/trunk/src/java/org/apache/hadoop/io/SequenceFile.java
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/java/org/apache/hadoop/io/SequenceFile.java?view=diff&rev=492243&r1=492242&r2=492243
==============================================================================
--- lucene/hadoop/trunk/src/java/org/apache/hadoop/io/SequenceFile.java (original)
+++ lucene/hadoop/trunk/src/java/org/apache/hadoop/io/SequenceFile.java Wed Jan  3 10:28:26 2007
@@ -2167,6 +2167,17 @@
             //queue
             this.close();
             
+            //this is required to handle the corner case where we have empty
+            //map outputs to merge. The empty map outputs will just have the 
+            //sequence file header; they won't be inserted in the priority 
+            //queue. Thus, they won't be deleted in the regular process where 
+            //cleanup happens when a stream is popped off (when the key/value
+            //from that stream has been iterated over) from the queue.
+            for (int i = 0; i < mStream.length; i++) {
+              if (mStream[i].in != null) //true if cleanup didn't happen
+                mStream[i].cleanup();
+            }
+
             SegmentDescriptor tempSegment = 
                  new SegmentDescriptor(0, fs.getLength(outputFile), outputFile);
             //put the segment back in the TreeMap
@@ -2305,6 +2316,7 @@
       /** closes the underlying reader */
       private void close() throws IOException {
         this.in.close();
+        this.in = null;
       }
 
       /** The default cleanup. Subclasses can override this with a custom 
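
The second hunk makes close() null out the reader so the new sweep can use (in != null) as an "already cleaned up" marker. Below is a slightly defensive variant of that idiom in isolation; the class and field names are placeholders, and unlike the actual patch this version also guards close() against being called twice.

    import java.io.Closeable;
    import java.io.IOException;

    // Illustrative sketch of the "null after close" idiom from the second hunk;
    // names are placeholders, not SequenceFile's actual internals.
    class ReaderHolder {
      private Closeable in;

      ReaderHolder(Closeable in) { this.in = in; }

      /** Closes the underlying reader and records that cleanup has happened. */
      void close() throws IOException {
        if (in != null) {
          in.close();
          in = null;          // later checks of (in != null) now see "already closed"
        }
      }

      /** Safe to call unconditionally, e.g. from a final sweep over all streams. */
      void cleanupIfNeeded() throws IOException {
        if (in != null) {     // true only if close() was never called
          close();
        }
      }
    }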


