hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1193) Map/reduce job gets OutOfMemoryException when set map out to be compressed
Date Tue, 24 Apr 2007 22:33:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12491475 ]

Hairong Kuang commented on HADOOP-1193:
---------------------------------------

More details about the failed job:

1. It uses record-level compression
2. mapred.child.java.opts is set to the default value: -Xmx512m
3. For the map output, each key is a Text and very small, but each value is a jute record with
an average size of approximately 25K. Some may be as large as several megabytes.
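
For reference, a minimal sketch of how a job with the configuration described above might be set up using the 0.12/0.13-era org.apache.hadoop.mapred API. This is not the reporter's actual code; the value class in the commented-out line is a placeholder for the jute record type.

    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;

    public class CompressedMapOutputConfig {
      public static JobConf configure(JobConf conf) {
        // Default 512 MB heap for each task child JVM (item 2 above).
        conf.set("mapred.child.java.opts", "-Xmx512m");

        // Compress intermediate map output with record-level compression (item 1 above).
        conf.setCompressMapOutput(true);
        conf.setMapOutputCompressionType(SequenceFile.CompressionType.RECORD);

        // Small Text keys; the large jute record value class is hypothetical (item 3 above).
        conf.setMapOutputKeyClass(Text.class);
        // conf.setMapOutputValueClass(MyJuteRecord.class);
        return conf;
      }
    }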

> Map/reduce job gets OutOfMemoryException when set map out to be compressed
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-1193
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1193
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.12.2
>            Reporter: Hairong Kuang
>         Assigned To: Arun C Murthy
>             Fix For: 0.13.0
>
>
> One of my jobs quickly fails with an OutOfMemoryException when I set the map output to be
> compressed. But it worked fine with release 0.10.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

