hadoop-common-dev mailing list archives

From "Devaraj Das (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-849) randomwriter fails with 'java.lang.OutOfMemoryError: Java heap space' in the 'reduce' task
Date Thu, 28 Dec 2006 10:54:24 GMT
     [ http://issues.apache.org/jira/browse/HADOOP-849?page=all ]

Devaraj Das updated HADOOP-849:
-------------------------------

    Attachment: 849.patch

This patch fixes the problem. The merge code that opens map output files for reading never closed
the empty map output files (empty meaning the sequence file contains just the sequence-file header
without any key/value data). For the RandomWriter case this shows up as an OutOfMemoryError, but it
really means the program has run out of file descriptors after merging hundreds of empty map output
files (in the case of RandomWriter, all map outputs are empty; the data is written directly to the
DFS).
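For illustration only, here is a minimal sketch of the behaviour being described, not the attached
patch. The class and method names (EmptySegmentCheck, isEmptySegment) are made up; SequenceFile.Reader,
next() and close() are the real Hadoop APIs. The point is simply that a reader opened on a header-only
file must be closed rather than abandoned:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Writable;

    // Hypothetical helper, not the patch itself: it only illustrates the
    // "close empty map outputs" behaviour described above.
    public class EmptySegmentCheck {

      // Returns true if the map output contains no key/value records, i.e. it is
      // just a SequenceFile header. The reader is closed on every path, which is
      // exactly what the merge code was not doing for empty files.
      static boolean isEmptySegment(FileSystem fs, Path mapOutput, Configuration conf,
                                    Writable key, Writable value) throws IOException {
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, mapOutput, conf);
        try {
          // next() returns false when the file holds no records beyond the header.
          return !reader.next(key, value);
        } finally {
          reader.close(); // without this close, every empty file leaks a file descriptor
        }
      }
    }

A caller would pass key/value Writable instances matching the file's key and value classes; for
RandomWriter the map outputs hold no records at all, so next() returns false immediately.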

> randomwriter fails with 'java.lang.OutOfMemoryError: Java heap space' in the 'reduce' task
> ------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-849
>                 URL: http://issues.apache.org/jira/browse/HADOOP-849
>             Project: Hadoop
>          Issue Type: Bug
>            Reporter: Arun C Murthy
>         Assigned To: Devaraj Das
>             Fix For: 0.9.2
>
>         Attachments: 849.patch
>
>
> Reproducible; tried to increase the child JVM's heap size via
> <property>
>   <name>mapred.child.java.opts</name>
>   <value>-Xmx512m</value>
> </property>
> without any difference; it still fails.
> Need to investigate further.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
