hadoop-common-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10042) Heap space error during copy from maptask to reduce task
Date Fri, 11 Oct 2013 17:36:43 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13792852#comment-13792852 ]

Suresh Srinivas commented on HADOOP-10042:

bq. But I think it's a bug (see my reference to other JIRA)
Sorry, I could not find it. What is the JIRA number?

> Heap space error during copy from maptask to reduce task
> --------------------------------------------------------
>                 Key: HADOOP-10042
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10042
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 1.2.1
>         Environment: Ubuntu cluster
>            Reporter: Dieter De Witte
>             Fix For: 1.2.1
>         Attachments: mapred-site.OLDxml
> I've described the problem on stackoverflow as well. It contains a link to another JIRA:
> http://stackoverflow.com/questions/19298357/out-of-memory-error-in-mapreduce-shuffle-phase
> My errors are exactly the same: an out-of-memory error when mapred.job.shuffle.input.buffer.percent
> = 0.7; the program does work when I set it to 0.2. Does this mean the original JIRA was not resolved?
> Does anybody have an idea whether this is a mapreduce issue or a misconfiguration
> on my part?
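The workaround the reporter describes could be sketched as the following mapred-site.xml fragment. This is only illustrative, based on the observation above that lowering the shuffle buffer fraction avoids the error; the 0.2 value comes from the report, not from a confirmed fix.

```xml
<!-- mapred-site.xml: illustrative workaround, not a confirmed fix.
     Lowers the fraction of reduce-task heap used to buffer map
     outputs during the shuffle (default in Hadoop 1.x is 0.70). -->
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.2</value>
</property>
```

The trade-off is that a smaller in-memory buffer forces more map outputs to spill to disk during the shuffle, so the job may run slower while avoiding the heap space error.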

This message was sent by Atlassian JIRA
