hadoop-yarn-issues mailing list archives

From "Sandy Ryza (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-1959) Fix headroom calculation in Fair Scheduler
Date Fri, 18 Apr 2014 20:45:16 GMT

    [ https://issues.apache.org/jira/browse/YARN-1959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13974456#comment-13974456
] 

Sandy Ryza commented on YARN-1959:
----------------------------------

One thing I don't understand from reading the Capacity Scheduler headroom calculation is how
it prevents apps from starving when a max capacity isn't set.  It's defined as min(userLimit,
queue-max-cap) - consumed.  If no max capacities are set and two users are running in a queue,
each taking up half the queue's capacity, the headroom for each user will be half the queue's
capacity.  If the cluster is saturated to the extent that the queue's usage can't go above
its capacity, the headroom is being vastly overreported.
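To make the scenario concrete, here is a minimal sketch of the formula as described above (the method and variable names are mine, not the Capacity Scheduler's): with no max capacity configured, queue-max-cap effectively falls back to the full cluster, so each user's headroom comes out as half the queue's capacity even when a saturated cluster has no room to give.

```java
// Sketch of headroom = min(userLimit, queueMaxCap) - consumed,
// using hypothetical names; not the actual scheduler code.
public class HeadroomSketch {
    static int headroom(int userLimit, int queueMaxCap, int consumed) {
        return Math.min(userLimit, queueMaxCap) - consumed;
    }

    public static void main(String[] args) {
        // Queue capacity 100; no max capacity set, so queueMaxCap
        // effectively defaults to the whole cluster (say 1000).
        // Two users each consume 50 (half the queue's capacity),
        // and assume userLimit works out to the queue capacity, 100.
        int perUserHeadroom = headroom(100, 1000, 50);
        System.out.println(perUserHeadroom);
        // Prints 50: half the queue's capacity is reported as headroom
        // for each user, even if the saturated cluster can't actually
        // let the queue grow past its capacity.
    }
}
```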

[~jlowe], any insight on this?

> Fix headroom calculation in Fair Scheduler
> ------------------------------------------
>
>                 Key: YARN-1959
>                 URL: https://issues.apache.org/jira/browse/YARN-1959
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Sandy Ryza
>
> The Fair Scheduler currently always sets the headroom to 0.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
