hadoop-yarn-issues mailing list archives

From "Carlo Curino (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6344) Rethinking OFF_SWITCH locality in CapacityScheduler
Date Thu, 16 Mar 2017 05:05:41 GMT

    [ https://issues.apache.org/jira/browse/YARN-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927509#comment-15927509 ]

Carlo Curino commented on YARN-6344:

I agree with what [~kkaranasos] said. In our clusters, the localityWaitFactor (as it is today)
almost never leads to reasonable behavior. For example, in a 5k-node cluster, a very large
job with 10k outstanding asks will only wait 2 (or up to 4) scheduling opportunities before
giving up on the rack and going off-switch. The change [~kkaranasos] is proposing looks
reasonable (he will share the code soon). We have been flighting it in test clusters with
good results, and will be running it in production in the coming days.
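To make the arithmetic above concrete, here is a simplified model of the behavior being described (this is not the actual CapacityScheduler code; class and method names are illustrative): the number of scheduling opportunities an application waits before relaxing to off-switch is proportional to its outstanding asks divided by the cluster size.

```java
// Simplified model (NOT the real CapacityScheduler implementation) of the
// off-switch delay discussed above: the wait is proportional to
// outstandingAsks / clusterNodes, so it only grows when a job's ask is
// many times larger than the cluster.
public class LocalityWaitModel {

    // Hypothetical wait factor: outstanding asks divided by cluster size.
    static double localityWaitFactor(int outstandingAsks, int clusterNodes) {
        return (double) outstandingAsks / clusterNodes;
    }

    // Scheduling opportunities to wait before going off-switch under this model.
    static long offSwitchDelay(int outstandingAsks, int clusterNodes) {
        return Math.round(localityWaitFactor(outstandingAsks, clusterNodes));
    }

    public static void main(String[] args) {
        // The example from the comment: 5k-node cluster, 10k outstanding asks.
        System.out.println(offSwitchDelay(10_000, 5_000)); // prints 2
        // A small batch: a single outstanding ask relaxes almost immediately.
        System.out.println(offSwitchDelay(1, 5_000));      // prints 0
    }
}
```

Under this model the delay is tiny for any job smaller than the cluster, which is the behavior the comment objects to.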

I think we could probably retain the current behavior if rack-locality-delay is not specified,
but in most scenarios that is equivalent to saying "we don't care about locality unless the
job is many times bigger than the cluster", in which case we might as well remove a bunch of
code from the RM. Am I missing something?

> Rethinking OFF_SWITCH locality in CapacityScheduler
> ---------------------------------------------------
>                 Key: YARN-6344
>                 URL: https://issues.apache.org/jira/browse/YARN-6344
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Konstantinos Karanasos
> When relaxing locality from node to rack, the {{node-locality-parameter}} is used: when
scheduling opportunities for a scheduler key are more than the value of this parameter, we
relax locality and try to assign the container to a node in the corresponding rack.
> On the other hand, when relaxing locality to off-switch (i.e., assigning the container anywhere
in the cluster), we use a {{localityWaitFactor}}, which is computed as the number of outstanding
requests for a specific scheduler key divided by the size of the cluster.
> In the case of applications that request containers in big batches (e.g., traditional MR
jobs), and for relatively small clusters, the localityWaitFactor does not affect locality
relaxation much.
> However, in the case of applications that request containers in small batches, this factor
takes a very small value, which leads to assigning off-switch containers too soon. This
situation is even more pronounced in big clusters.
> For example, if an application requests only one container per request, the locality
will be relaxed after a single missed scheduling opportunity.
> The purpose of this JIRA is to rethink the way we relax locality for off-switch containers.
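The contrast between the two relaxation triggers in the description above can be sketched as follows (again a simplified model with illustrative names, not the actual scheduler code): node-to-rack relaxation waits for a fixed, configured number of missed opportunities, while rack-to-off-switch relaxation waits for a threshold that shrinks with the size of the ask.

```java
// Simplified model (NOT the real CapacityScheduler API) of the two
// relaxation triggers described in the issue.
public class RelaxationTriggers {

    // Node -> rack: a fixed threshold (the node-locality-delay parameter).
    static boolean canRelaxToRack(long missedOpportunities, int nodeLocalityDelay) {
        return missedOpportunities > nodeLocalityDelay;
    }

    // Rack -> off-switch: the threshold is the (hypothetical) wait factor,
    // which shrinks as the ask gets smaller relative to the cluster, so
    // small batches relax almost immediately.
    static boolean canRelaxOffSwitch(long missedOpportunities,
                                     int outstandingAsks, int clusterNodes) {
        double localityWaitFactor = (double) outstandingAsks / clusterNodes;
        return missedOpportunities > localityWaitFactor;
    }

    public static void main(String[] args) {
        // One outstanding ask in a 5k-node cluster: factor = 0.0002, so a
        // single missed scheduling opportunity already permits off-switch,
        // matching the single-container example in the description.
        System.out.println(canRelaxOffSwitch(1, 1, 5_000)); // prints true
    }
}
```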

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: yarn-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: yarn-issues-help@hadoop.apache.org
