hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1193) Map/reduce job gets OutOfMemoryException when set map out to be compressed
Date Thu, 24 May 2007 16:41:16 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun C Murthy updated HADOOP-1193:
----------------------------------

    Attachment: HADOOP-1193_2_20070524.patch

Here is an updated version of the patch with the changes I made to BigMapOutput to help test
it (basically I made it extend ToolBase and added an option to create the large map input too).
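
For anyone trying the patch, the test driver now follows the usual ToolBase pattern; roughly
the shape is as below. This is just a sketch, not the patch itself: the -createInput flag name
and the input-generation details are illustrative, and it assumes ToolBase still exposes Tool's
'int run(String[] args) throws Exception' entry point.

    // Sketch only: option name and input generation are illustrative.
    public class BigMapOutputSketch extends org.apache.hadoop.util.ToolBase {
      public int run(String[] args) throws Exception {
        boolean createInput = false;
        for (int i = 0; i < args.length; ++i) {
          if ("-createInput".equals(args[i])) {   // hypothetical flag name
            createInput = true;
          }
        }
        if (createInput) {
          // generate the large (>2G) SequenceFile to be used as map input
        }
        // ...configure and submit the map/reduce job against that input...
        return 0;
      }
    }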

I have tested this with large map inputs (>2G) and it seems to hold up well, i.e. the codec
pool ensures we create only one compressor and a very small number of decompressors (fewer
than 10) even for extremely large map inputs.
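
To make the pooling idea concrete, here is a minimal sketch of the kind of reuse involved.
This is a simplification for illustration, not the code in the patch; the class and method
names are made up:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Reuse Compressor/Decompressor instances instead of allocating a
    // fresh one (and its native buffers) per spill/merge, which is what
    // ran the tasks out of memory.
    public class SimpleCodecPool<T> {
      private final Map<Class<?>, List<T>> pool = new HashMap<Class<?>, List<T>>();

      // Borrow a cached instance, or return null so the caller can fall
      // back to codec.createCompressor()/createDecompressor().
      public synchronized T borrow(Class<?> codecClass) {
        List<T> list = pool.get(codecClass);
        return (list == null || list.isEmpty()) ? null : list.remove(list.size() - 1);
      }

      // Give the instance back once the stream is finished with it.
      public synchronized void giveBack(Class<?> codecClass, T instance) {
        List<T> list = pool.get(codecClass);
        if (list == null) {
          list = new ArrayList<T>();
          pool.put(codecClass, list);
        }
        list.add(instance);
      }
    }

With that, a map task only ever pays for one compressor no matter how many times it spills,
and the sort/merge side reuses a handful of decompressors.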

> Map/reduce job gets OutOfMemoryException when set map out to be compressed
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-1193
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1193
>             Project: Hadoop
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.12.2
>            Reporter: Hairong Kuang
>         Assigned To: Arun C Murthy
>         Attachments: HADOOP-1193_1_20070517.patch, HADOOP-1193_2_20070524.patch
>
>
> One of my jobs quickly fails with an OutOfMemoryException when I set the map output to
> be compressed. But it worked fine with release 0.10.
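
For context, map-output compression in this era is switched on via JobConf, roughly as in
the fragment below (picking a codec explicitly is optional; DefaultCodec is the zlib-backed
default):

    JobConf job = new JobConf();
    job.setCompressMapOutput(true);   // this is the setting that triggers the OOM
    job.setMapOutputCompressorClass(org.apache.hadoop.io.compress.DefaultCodec.class);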

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

