hadoop-common-dev mailing list archives

From "Runping Qi (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-1987) Mapper failed due to out of memory
Date Tue, 02 Oct 2007 21:26:51 GMT
Mapper failed due to out of memory
----------------------------------

                 Key: HADOOP-1987
                 URL: https://issues.apache.org/jira/browse/HADOOP-1987
             Project: Hadoop
          Issue Type: Bug
          Components: mapred
            Reporter: Runping Qi



When a map/reduce job takes block-compressed sequence files as input,
the input data may expand significantly in size (a few times to tens of times, depending on
the compression ratio of the particular data blocks in the files).
This can cause out-of-memory failures in mappers.
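The expansion described above is easy to reproduce in miniature. The sketch below uses Python's zlib as a generic stand-in for Hadoop's block compression codec (it is not the Hadoop code path); it shows how a small compressed block of repetitive records can expand by an order of magnitude or more once decompressed into memory:

```python
import zlib

# Repetitive record data compresses very well, so a small on-disk
# block can expand dramatically when decompressed in memory.
raw = b"the same record over and over\n" * 100_000
compressed = zlib.compress(raw)

ratio = len(raw) / len(compressed)
print(f"compressed: {len(compressed)} bytes, "
      f"expanded: {len(raw)} bytes, ratio ~{ratio:.0f}x")
```

With a 1 GB heap, a mapper that buffers even a few hundred megabytes of such expanded input can exhaust memory while the on-disk input looked modest.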

In my case, I set the heap space to 1 GB.
The mappers started to fail when the accumulated expanded input size rose above 300 MB.
 


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

