hadoop-yarn-issues mailing list archives

From "He Tianyi (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6101) Delay scheduling for node resource balance
Date Sat, 04 Feb 2017 01:54:51 GMT

    [ https://issues.apache.org/jira/browse/YARN-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15852480#comment-15852480
] 

He Tianyi commented on YARN-6101:
---------------------------------

Thanks for the reply, [~tangzhankun].
It's a typo; memory should be 1 GB left.

I cannot share the raw SLS configuration since it is generated directly from a production cluster
(with 2500 nodes) by tracking all applications submitted on that day, including Spark, MapReduce,
and other frameworks.

> Delay scheduling for node resource balance
> ------------------------------------------
>
>                 Key: YARN-6101
>                 URL: https://issues.apache.org/jira/browse/YARN-6101
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: fairscheduler
>            Reporter: He Tianyi
>            Priority: Minor
>         Attachments: YARN-6101.preliminary.0000.patch
>
>
> We observed that, in today's cluster, usage of Spark has dramatically increased.
> This introduced a new issue: CPU/memory utilization on a single node may become imbalanced,
because Spark is generally more memory-intensive. For example, a node with capability
(48 cores, 192 GB memory) cannot satisfy a (1 core, 2 GB memory) request if its currently used
resource is (20 cores, 191 GB memory), even with plenty of total available resource across the whole cluster.
> A thought for avoiding this situation is to introduce some strategy during scheduling.
> This JIRA proposes a delay-scheduling-like approach to achieve better balance between
different types of resources on each node.
> The basic idea is to consider the dominant resource of each node: when a scheduling opportunity
on a particular node is offered to a resource request, prefer allocations that would change the
dominant resource of the node, or, in the worst case, allocate at once when the number of
offered scheduling opportunities exceeds a certain threshold.
> With YARN SLS and a simulation file with a hybrid workload (MR + Spark), the approach improved
cluster resource usage by nearly 5%. After deploying it to production, we observed an 8% improvement.
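
The delay-based balancing heuristic from the quoted description could be sketched roughly as
follows. This is only an illustrative sketch, not the actual patch: the class, method names, and
the threshold value are hypothetical and do not correspond to real FairScheduler APIs.

```java
// Illustrative sketch of the delay-based balancing idea: on each node,
// allocate a request only if it would flip the node's dominant resource
// (pushing CPU and memory utilization toward balance), or if the request
// has already been passed over too many times.
public final class BalancedDelayScheduler {

    // Hypothetical threshold: after this many missed scheduling
    // opportunities, allocate regardless of balance to avoid starvation.
    static final int MAX_MISSED_OPPORTUNITIES = 3;

    enum Resource { CPU, MEMORY }

    /** Dominant resource of a node = the one with higher utilization. */
    static Resource dominant(int usedVcores, int capVcores,
                             long usedMb, long capMb) {
        double cpuUtil = (double) usedVcores / capVcores;
        double memUtil = (double) usedMb / capMb;
        return cpuUtil >= memUtil ? Resource.CPU : Resource.MEMORY;
    }

    /**
     * Decide whether to accept this scheduling opportunity: accept if
     * the allocation would change the node's dominant resource, or if
     * the request has waited past the opportunity threshold.
     */
    static boolean shouldAllocate(int usedVcores, int capVcores,
                                  long usedMb, long capMb,
                                  int reqVcores, long reqMb,
                                  int missedOpportunities) {
        if (missedOpportunities >= MAX_MISSED_OPPORTUNITIES) {
            return true; // worst case: allocate at once
        }
        Resource before = dominant(usedVcores, capVcores, usedMb, capMb);
        Resource after = dominant(usedVcores + reqVcores, capVcores,
                                  usedMb + reqMb, capMb);
        return before != after;
    }
}
```

For example, on a memory-dominant node (20 of 48 cores, 90 of 192 GB used), a CPU-heavy
(4 cores, 4 GB) request flips dominance to CPU and is accepted immediately, while a memory-heavy
(1 core, 30 GB) request is delayed until it has missed enough opportunities.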



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


