hadoop-common-dev mailing list archives

From "Amareshwari Sriramadasu (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6088) Task stuck in cleanup with OutOfMemoryErrors
Date Fri, 19 Jun 2009 05:59:07 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12721667#action_12721667 ]

Amareshwari Sriramadasu commented on HADOOP-6088:
-------------------------------------------------

Task logs show:
2009-06-12 21:37:11,574 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
java.io.IOException: Task: attempt_200905250540_16139_r_000176_0 - The reduce copier failed
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:382)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)
Caused by: java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.IFile$Reader.readNextBlock(IFile.java:342)
	at org.apache.hadoop.mapred.IFile$Reader.next(IFile.java:404)
	at org.apache.hadoop.mapred.Merger$Segment.next(Merger.java:184)
	at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:376)
	at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:338)
	at org.apache.hadoop.mapred.Merger.merge(Merger.java:60)
	at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$LocalFSMerger.run(ReduceTask.java:2460)
2009-06-12 21:37:11,597 INFO org.apache.hadoop.mapred.ReduceTask: Read 43883500 bytes from map-output for attempt_200905250540_16139_m_001504_0
2009-06-12 21:37:11,696 INFO org.apache.hadoop.mapred.TaskRunner: Runnning cleanup for the task

Similar to the FSError handling, for any Error the Task should inform the TT and get killed forcefully. Currently, Child catches Throwable and does the cleanup, shutting down the log Manager etc. We should catch only Exception there, and Errors should be reported to the TT (rough sketch below).
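
Something like the following, as a rough illustrative sketch only -- the TaskReporter interface and fatalError() method here are placeholders standing in for the child-to-TT umbilical call, not the actual API:

// Hypothetical sketch of the proposed Child error handling; TaskReporter
// and fatalError() are illustrative placeholders, not real Hadoop classes.
public class ChildErrorHandlingSketch {

  /** Stand-in for the channel the child uses to talk to the TaskTracker. */
  interface TaskReporter {
    void fatalError(String taskId, String message);
  }

  static void runTask(String taskId, Runnable task, TaskReporter reporter) {
    try {
      task.run();
    } catch (Exception e) {
      // Ordinary failures: safe to run the usual cleanup, flush logs, etc.
      System.err.println("Error running child, doing cleanup: " + e);
      runCleanup();
    } catch (Error err) {
      // OutOfMemoryError and other Errors: JVM state is suspect, so skip
      // local cleanup, report to the TaskTracker, and exit immediately.
      reporter.fatalError(taskId, err.toString());
      Runtime.getRuntime().halt(1);
    }
  }

  static void runCleanup() {
    // placeholder for task cleanup (discard output, shut down log manager, ...)
  }

  public static void main(String[] args) {
    runTask("attempt_200905250540_16139_r_000176_0",
            new Runnable() {
              public void run() { throw new OutOfMemoryError("simulated"); }
            },
            new TaskReporter() {
              public void fatalError(String id, String msg) {
                System.err.println("Reporting fatal error for " + id + ": " + msg);
              }
            });
  }
}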
Thoughts?


> Task stuck in cleanup with OutOfMemoryErrors
> --------------------------------------------
>
>                 Key: HADOOP-6088
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6088
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>            Reporter: Amareshwari Sriramadasu
>             Fix For: 0.21.0
>
>
> Observed a task with an OutOfMemoryError, stuck in cleanup.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

