hadoop-common-dev mailing list archives

From "Nigel Daley (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1012) OutOfMemoryError in reduce
Date Sun, 18 Feb 2007 06:40:05 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12473981 ]

Nigel Daley commented on HADOOP-1012:
-------------------------------------

Running on JDK 6, I now get this stack trace for the OutOfMemoryError:

2007-02-18 03:53:21,906 WARN org.apache.hadoop.mapred.TaskRunner: Merge of the inmemory files threw an exception: java.lang.OutOfMemoryError: Java heap space
	at java.io.BufferedInputStream.<init>(BufferedInputStream.java:178)
	at org.apache.hadoop.fs.FSDataInputStream$Buffer.<init>(FSDataInputStream.java:248)
	at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:327)
	at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:321)
	at org.apache.hadoop.fs.FSDataInputStream$Checker.<init>(FSDataInputStream.java:60)
	at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:300)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:256)
	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1167)
	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1102)
	at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:2531)
	at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.merge(SequenceFile.java:2391)
	at org.apache.hadoop.io.SequenceFile$Sorter.merge(SequenceFile.java:2135)
	at org.apache.hadoop.mapred.ReduceTaskRunner.prepare(ReduceTaskRunner.java:615)
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:135)
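
For scale: the trace shows the merge opening a SequenceFile.Reader per remaining segment, and each reader allocates buffered (and checksummed) input streams up front. A rough back-of-the-envelope sketch of the stream-buffer cost alone; the buffer size and per-reader stream count below are assumptions (io.file.buffer.size defaulted to 4096 in this era), not measurements from the failing task:

```java
// Rough heap estimate for holding many merge-segment readers open at once.
// Buffer sizes are assumptions, not measured from the failing task.
public class MergeHeapEstimate {
    static long estimateBytes(int segments, int bufferBytes, int buffersPerReader) {
        return (long) segments * bufferBytes * buffersPerReader;
    }

    public static void main(String[] args) {
        int segments = 9000;        // map outputs in the DFSIO run below
        int bufferBytes = 4096;     // assumed io.file.buffer.size default
        int buffersPerReader = 2;   // assumed: data stream + checksum stream
        long bytes = estimateBytes(segments, bufferBytes, buffersPerReader);
        System.out.println(bytes / (1024 * 1024) + " MB just for stream buffers");
        // prints: 70 MB just for stream buffers
    }
}
```

On top of the map-output data already held in ramfs, tens of megabytes of transient buffers could plausibly push a default-sized child heap over the edge.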

> OutOfMemoryError in reduce
> --------------------------
>
>                 Key: HADOOP-1012
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1012
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.11.1
>            Reporter: Nigel Daley
>            Priority: Critical
>
> I'm seeing OutOfMemoryErrors from a reduce in each of the DFSIO Benchmark and RandomWriter. No stack traces are given. Snippets from the TaskTracker logs are below. I believe I first saw this on February 3rd during tests that I run weekly.
> =====
> DFSIO
> =====
> ...
> 2007-02-10 18:25:20,201 INFO org.apache.hadoop.mapred.TaskRunner: task_0005_r_000000_0
Copying of all map outputs complete. Initiating the last merge on the remaining files in ramfs://mapoutput9105104
> 2007-02-10 18:25:20,771 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:21,773 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:23,280 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:24,607 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:25,960 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:27,105 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:28,982 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:29,984 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:31,481 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:33,379 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:34,478 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:35,656 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:36,758 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:42,593 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:43,600 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:46,573 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:48,791 INFO org.apache.hadoop.mapred.TaskTracker: task_0005_r_000000_0
0.33333334% reduce > copy (9000 of 9000 at 0.00 MB/s)
> 2007-02-10 18:25:49,828 WARN org.apache.hadoop.mapred.TaskRunner: Merge of the inmemory
files threw an exception: java.lang.OutOfMemoryError: Java heap space
> ...
> ============
> RandomWriter
> ============
> ...
> 2007-02-11 03:58:00,887 INFO org.apache.hadoop.mapred.TaskRunner: task_0001_r_000000_3
Copying of all map outputs complete. Initiating the last merge on the remaining files in ramfs://mapoutput6576294
> 2007-02-11 03:58:01,681 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:02,921 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:03,923 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:05,375 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:06,742 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:08,818 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:09,821 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:11,406 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:13,277 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:14,280 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:15,282 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:16,284 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:18,401 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:19,403 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:20,636 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:37,860 INFO org.apache.hadoop.mapred.TaskTracker: task_0001_r_000000_3
0.33333334% reduce > copy (8890 of 8890 at 0.00 MB/s)
> 2007-02-11 03:58:37,898 WARN org.apache.hadoop.mapred.TaskRunner: task_0001_r_000000_3
Child Error
> java.lang.OutOfMemoryError: Java heap space
> ...
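
Not suggested in the thread itself, but while this is investigated a common mitigation would be to give the reduce child more heap and shrink the in-memory shuffle buffer. A sketch of the relevant hadoop-site.xml entries; the property names and defaults are assumed from 0.11-era configuration and should be checked against hadoop-default.xml:

```xml
<!-- hadoop-site.xml: workaround knobs, not a fix for HADOOP-1012.
     Property names assumed from 0.11-era defaults. -->
<property>
  <name>mapred.child.java.opts</name>
  <!-- raise the task child's heap (assumed default: -Xmx200m) -->
  <value>-Xmx512m</value>
</property>
<property>
  <name>fs.inmemory.size.mb</name>
  <!-- shrink the in-memory (ramfs) buffer used for map-output copies -->
  <value>32</value>
</property>
```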

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

