hadoop-yarn-issues mailing list archives

From "Karthik Kambatla (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-2041) Hard to co-locate MR2 and Spark jobs on the same cluster in YARN
Date Sat, 17 May 2014 01:55:15 GMT

    [ https://issues.apache.org/jira/browse/YARN-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000601#comment-14000601 ]

Karthik Kambatla commented on YARN-2041:
----------------------------------------

yarn.nodemanager.resource.memory-mb should ideally be fixed per node in a YARN cluster (set once according to the node's available memory, not re-tuned per workload). As [~tgaves] said, we should look at how the individual tasks are scheduled (spread out) across the nodes and at other relevant information.
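
For reference, the property in question lives in yarn-site.xml alongside the scheduler's allocation bounds. A minimal sketch with purely illustrative values (the actual numbers depend on each node's physical memory):

    <!-- yarn-site.xml: illustrative values only; set per the node's physical RAM. -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>24576</value>  <!-- total memory YARN may hand out to containers on this node -->
    </property>
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>1024</value>   <!-- smallest container the RM will grant -->
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>8192</value>   <!-- largest single container the RM will grant -->
    </property>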

> Hard to co-locate MR2 and Spark jobs on the same cluster in YARN
> ----------------------------------------------------------------
>
>                 Key: YARN-2041
>                 URL: https://issues.apache.org/jira/browse/YARN-2041
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: nodemanager
>    Affects Versions: 2.3.0
>            Reporter: Nishkam Ravi
>
> Performance of MR2 jobs falls drastically as the YARN config parameter yarn.nodemanager.resource.memory-mb is increased beyond a certain value.
> Performance of Spark falls drastically as the value of yarn.nodemanager.resource.memory-mb is decreased below a certain value for a large data set.
> This makes it hard to co-locate MR2 and Spark jobs in YARN.
> The experiments are being conducted on a 6-node cluster. The following workloads are being run: TeraGen, TeraSort, TeraValidate, WordCount, ShuffleText and PageRank.
> Will add more details to this JIRA over time.
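
To make the trade-off above concrete: if yarn.nodemanager.resource.memory-mb is held fixed per node, the remaining knobs are per-framework — MR2 container sizes in mapred-site.xml and Spark's executor memory (spark.executor.memory, or --executor-memory on spark-submit). A minimal mapred-site.xml sketch with purely illustrative values:

    <!-- mapred-site.xml: illustrative MR2 container sizes; tuned per job, not per node. -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>1536</value>   <!-- container size requested for each map task -->
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>3072</value>   <!-- container size requested for each reduce task -->
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx1228m</value>  <!-- JVM heap kept below the map container size -->
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx2457m</value>  <!-- JVM heap kept below the reduce container size -->
    </property>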



--
This message was sent by Atlassian JIRA
(v6.2#6252)
