hadoop-yarn-issues mailing list archives

From "Sunil G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6191) CapacityScheduler preemption by container priority can be problematic for MapReduce
Date Thu, 16 Feb 2017 16:57:41 GMT

    [ https://issues.apache.org/jira/browse/YARN-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870275#comment-15870275 ]

Sunil G commented on YARN-6191:
-------------------------------

Hi [~jlowe]

During the inter-queue preemption improvement work, there were several ideas about a pluggable
policy for selecting which containers of an app to preempt.
Today we do this based on container priority alone. A few more useful parameters would be
(a rough sketch of such a policy follows the list):
- % of work completed
- time remaining to finish a container
- locality of the preempted container (whether freeing it would help the demanding queue's app get a better
placement)
- +type of container+ as discussed here (whether a map or a reduce is the better candidate to preempt)
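
To make this concrete, here is a rough sketch of what such a pluggable selection policy could look like. This interface does not exist in the scheduler today; the name {{ContainerSelectionPolicy}} and its method are purely illustrative:

{code:java}
// Hypothetical sketch only -- not an existing YARN/CapacityScheduler interface.
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;

public interface ContainerSelectionPolicy {
  /**
   * Pick containers of a single application to preempt until the requested
   * amount of resource is covered. Implementations could rank candidates by
   * priority, % of work completed, remaining run time, locality, or container
   * type instead of container priority alone.
   */
  List<RMContainer> selectContainers(Collection<RMContainer> candidates,
      Resource toFree);
}
{code}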

However, some or all of these may not always be available, or may not suit a given use case well.
Having a *pre-computed preemption cost* per container could be a good idea: the container's
priority would contribute to that cost, and the other parameters could as well (if configured).
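
For example, such a cost could be a weighted sum of the configured factors, along these lines. The class, field names and weighting below are only an illustration of the idea, not proposed code:

{code:java}
// Hypothetical sketch of a pre-computed preemption cost per container.
public final class PreemptionCost {

  // Configurable weights; setting a weight to 0 disables that factor.
  private final double priorityWeight;
  private final double progressWeight;
  private final double remainingTimeWeight;

  public PreemptionCost(double priorityWeight, double progressWeight,
      double remainingTimeWeight) {
    this.priorityWeight = priorityWeight;
    this.progressWeight = progressWeight;
    this.remainingTimeWeight = remainingTimeWeight;
  }

  /**
   * Higher cost means the container is more expensive to kill, so the
   * preemption policy should pick containers with the lowest cost first.
   *
   * @param priority        container priority (lower numeric value means
   *                        higher priority, per YARN's convention)
   * @param progress        fraction of work completed, in [0, 1]
   * @param remainingMillis estimated time left for the container to finish
   */
  public double costOf(int priority, double progress, long remainingMillis) {
    double priorityTerm = priorityWeight / (1.0 + priority);
    double progressTerm = progressWeight * progress;
    // Work that is about to finish is costly to throw away.
    double remainingTerm =
        remainingTimeWeight / (1.0 + remainingMillis / 1000.0);
    return priorityTerm + progressTerm + remainingTerm;
  }
}
{code}

With a cost like this, the priority-only behaviour described in this JIRA becomes just the special case where only the priority weight is non-zero.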

> CapacityScheduler preemption by container priority can be problematic for MapReduce
> -----------------------------------------------------------------------------------
>
>                 Key: YARN-6191
>                 URL: https://issues.apache.org/jira/browse/YARN-6191
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Jason Lowe
>
> A MapReduce job with thousands of reducers and just a couple of maps left to go was running
> in a preemptable queue.  Periodically other queues would get busy and the RM would preempt
> some resources from the job, but it _always_ picked the job's map tasks first because they
> use the lowest priority containers.  Even though the reducers had a shorter running time,
> most were spared but the maps were always shot.  Since the map tasks ran for a longer time
> than the preemption period, the job was in a perpetual preemption loop.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

