hadoop-mapreduce-issues mailing list archives

From "Junping Du (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-5705) mapreduce.task.io.sort.mb hardcoded cap at 2047
Date Mon, 04 Apr 2016 14:32:25 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224234#comment-15224234
] 

Junping Du commented on MAPREDUCE-5705:
---------------------------------------

MAPREDUCE-2308 is a very old JIRA from the MRv1 era. Let's reopen this one and fix it in 2.x.

> mapreduce.task.io.sort.mb hardcoded cap at 2047
> -----------------------------------------------
>
>                 Key: MAPREDUCE-5705
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5705
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.2.0
>         Environment: Multinode Dell XD720 cluster Centos6 running HDP2
>            Reporter: Joseph Niemiec
>
> mapreduce.task.io.sort.mb is hardcoded to not allow values larger than 2047. If you enter
> a value larger than this, the map tasks will always crash at this line:
> https://github.com/apache/hadoop-mapreduce/blob/HDFS-641/src/java/org/apache/hadoop/mapred/MapTask.java?source=cc#L746
> The nodes at our dev site have over 380 GB of RAM each, but we are not able to make the
> best use of large mappers (15 GB mappers) because of the hardcoded buffer maximum. Is there
> a reason this value has been hardcoded?
> --
> Also validated on my dev VM. Indeed setting io.sort.mb to 2047 works but 2048 fails.
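The 2047/2048 boundary reported above is consistent with a 32-bit integer overflow when the buffer size is converted from megabytes to bytes. The following is a minimal sketch of that hypothesis (my own illustration, not code taken from MapTask.java): 2047 MB in bytes still fits in a signed 32-bit int, while 2048 MB is exactly 2^31 bytes and wraps negative.

```java
// Hypothetical illustration of the suspected overflow behind the 2047 MB cap:
// converting io.sort.mb to bytes in a 32-bit int wraps at exactly 2048 MB.
public class SortMbOverflow {
    public static void main(String[] args) {
        int maxMb = 2047;
        int badMb = 2048;

        // 2047 MB in bytes = 2,146,435,072, which fits below Integer.MAX_VALUE.
        int okBytes = maxMb << 20;
        // 2048 MB in bytes = 2^31, which overflows to Integer.MIN_VALUE.
        int overflowBytes = badMb << 20;

        System.out.println("2047 MB -> " + okBytes + " bytes");
        System.out.println("2048 MB -> " + overflowBytes + " bytes");
    }
}
```

This would explain why 2047 works and 2048 fails: any buffer-size check done on the byte count sees a negative value once the megabyte setting reaches 2048.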




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
