hadoop-mapreduce-issues mailing list archives

From "Harsh J (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-5705) mapreduce.task.io.sort.mb hardcoded cap at 2047
Date Sun, 05 Jan 2014 03:12:51 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862483#comment-13862483 ]

Harsh J commented on MAPREDUCE-5705:
------------------------------------

Correction: the right JIRA for the map-side limitation is MAPREDUCE-2308.

> mapreduce.task.io.sort.mb hardcoded cap at 2047
> -----------------------------------------------
>
>                 Key: MAPREDUCE-5705
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5705
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.2.0
>         Environment: Multinode Dell XD720 cluster Centos6 running HDP2
>            Reporter: Joseph Niemiec
>
> mapreduce.task.io.sort.mb is hardcoded to not allow values larger than 2047. If you enter
> a value larger than this, the map tasks will always crash at this line -
> https://github.com/apache/hadoop-mapreduce/blob/HDFS-641/src/java/org/apache/hadoop/mapred/MapTask.java?source=cc#L746
> The nodes at the dev site have over 380 GB of RAM each, but we are unable to make the best
> use of large mappers (15 GB mappers) because of the hardcoded buffer maximum. Is there a reason
> this value has been hardcoded?
> --
> Also validated on my dev VM. Indeed, setting io.sort.mb to 2047 works but 2048 fails.
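
For context, the guard at the linked MapTask.java line behaves roughly like the sketch below (paraphrased, not a verbatim Hadoop excerpt; class and method names are illustrative). The configured megabyte value is masked to 11 bits, so anything above 2047 is rejected before the map-side sort buffer, a single Java byte[], is allocated: 2048 MB (2^31 bytes) would overflow the signed 32-bit array length, while 2047 MB still fits.

    import java.io.IOException;

    // Sketch of the io.sort.mb cap check (approximate, for illustration only).
    public class IoSortMbCapSketch {

        // mapreduce.task.io.sort.mb is read as an int number of megabytes and
        // masked to 11 bits, so only values in 0..2047 pass.
        static void checkSortMb(int sortmb) throws IOException {
            if ((sortmb & 0x7FF) != sortmb) {
                throw new IOException("Invalid \"mapreduce.task.io.sort.mb\": " + sortmb);
            }
            // The sort buffer itself is a single byte[sortmb << 20]; an array
            // length is a signed 32-bit int, so 2048 MB (2^31 bytes) cannot be
            // represented, while 2047 MB (2^31 - 2^20 bytes) still can.
        }

        public static void main(String[] args) {
            for (int mb : new int[] {100, 2047, 2048}) {
                try {
                    checkSortMb(mb);
                    System.out.println(mb + " MB: accepted");
                } catch (IOException e) {
                    System.out.println(mb + " MB: rejected (" + e.getMessage() + ")");
                }
            }
        }
    }

Running the sketch reports 100 and 2047 as accepted and 2048 as rejected, matching the 2047-works/2048-fails observation above.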




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
