hadoop-mapreduce-issues mailing list archives

From "Joseph Niemiec (JIRA)" <j...@apache.org>
Subject [jira] [Created] (MAPREDUCE-5705) mapreduce.task.io.sort.mb hardcoded cap at 2047
Date Fri, 03 Jan 2014 17:59:50 GMT
Joseph Niemiec created MAPREDUCE-5705:
-----------------------------------------

             Summary: mapreduce.task.io.sort.mb hardcoded cap at 2047
                 Key: MAPREDUCE-5705
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5705
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 2.2.0
         Environment: Multinode Dell XD720 cluster Centos6 running HDP2
            Reporter: Joseph Niemiec


mapreduce.task.io.sort.mb is hardcoded to not allow values larger than 2047. If you enter
a value larger than this, the map tasks will always crash at this line:

https://github.com/apache/hadoop-mapreduce/blob/HDFS-641/src/java/org/apache/hadoop/mapred/MapTask.java?source=cc#L746
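For reference, the validation at that line appears to mask the configured value against
0x7FF (decimal 2047), so any setting with a bit above bit 10 is rejected. A minimal
standalone sketch of that behavior (not the exact Hadoop source; the property name is
written out for clarity):

    import java.io.IOException;

    public class SortMbCapDemo {
        // Sketch of the MapTask check: masking against 0x7FF
        // (binary 111_1111_1111) only passes values 0..2047.
        static void checkSortMb(int sortmb) throws IOException {
            if ((sortmb & 0x7FF) != sortmb) {
                throw new IOException(
                    "Invalid \"mapreduce.task.io.sort.mb\": " + sortmb);
            }
        }

        public static void main(String[] args) throws IOException {
            checkSortMb(2047); // passes
            checkSortMb(2048); // throws IOException, matching the crash above
        }
    }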

The nodes at our dev site have over 380 GB of RAM each, but we are not able to make the
best use of large mappers (15 GB mappers) because of the hardcoded buffer cap. Is there a
reason this value has been hardcoded?


--
Also validated on my dev VM: indeed, setting io.sort.mb to 2047 works but 2048 fails.
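
One plausible reason for the cap, assuming the spill buffer is backed by a single Java
byte[] (whose length is a signed 32-bit int): 2047 MB still fits in an int once converted
to bytes, while 2048 MB is exactly 2^31 and overflows. A quick check of the arithmetic:

    public class SortBufferLimit {
        public static void main(String[] args) {
            // Java array lengths are ints, so a single byte[] buffer
            // can hold at most Integer.MAX_VALUE (2^31 - 1) bytes.
            long bytesFor2047 = 2047L << 20; // 2,146,435,072 bytes
            long bytesFor2048 = 2048L << 20; // 2,147,483,648 bytes = 2^31
            System.out.println(bytesFor2047 <= Integer.MAX_VALUE); // true
            System.out.println(bytesFor2048 <= Integer.MAX_VALUE); // false

            // In 32-bit arithmetic the conversion itself overflows:
            System.out.println(2048 << 20); // -2147483648
        }
    }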



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
