hadoop-mapreduce-dev mailing list archives

From "Harsh J (Resolved) (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (MAPREDUCE-13) Mapper failed due to out of memory
Date Sat, 31 Dec 2011 08:54:31 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-13?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved MAPREDUCE-13.
------------------------------

    Resolution: Not A Problem

This isn't a problem anymore. Compressed inputs work nicely at present and are hardly ever
the cause of maps going OOM.
                
> Mapper failed due to out of memory
> ----------------------------------
>
>                 Key: MAPREDUCE-13
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-13
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Runping Qi
>
> When a map/reduce job takes block-compressed sequence files as input,
> the input data may expand significantly in size (a few to tens of times, depending on
> the compression ratio of the particular data blocks in the files).
> This may cause out-of-memory problems in mappers.
> In my case, I set the heap space to 1GB.
> The mappers started to fail when the accumulated expanded input size exceeded 300MB.
>
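For context, a minimal sketch (not part of the original report) of the scenario the
reporter describes, using the classic pre-YARN org.apache.hadoop.mapred-era API that
this issue dates from. The path and class names are illustrative only; the
mapred.child.java.opts setting is the standard way the 1GB task heap mentioned above
would have been configured.

    // Illustrative sketch only: writing a block-compressed SequenceFile and
    // capping the task child JVM heap at 1 GB, as in the report above.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;

    public class BlockCompressedInputSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path input = new Path("/tmp/block-compressed.seq"); // hypothetical path

            // Block compression batches many records per compressed block, so one
            // block can expand to many times its on-disk size when decompressed,
            // which is the expansion the reporter observed in the mappers.
            SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, input, Text.class, Text.class,
                SequenceFile.CompressionType.BLOCK);
            writer.append(new Text("key"), new Text("value"));
            writer.close();

            // The reporter's setup: each map/reduce child JVM gets a 1 GB heap.
            JobConf job = new JobConf(conf);
            job.set("mapred.child.java.opts", "-Xmx1024m");
        }
    }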

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira