hadoop-yarn-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6191) CapacityScheduler preemption by container priority can be problematic for MapReduce
Date Tue, 14 Feb 2017 21:39:42 GMT

    [ https://issues.apache.org/jira/browse/YARN-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15866746#comment-15866746 ]

Jason Lowe commented on YARN-6191:
----------------------------------

This is similar to the FairScheduler problem described in YARN-3054.  Since preemption always
picks the lowest-priority containers, for MapReduce that always means the maps.  Some jobs can
get preempted often enough that they are never able to finish.
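
Roughly, the victim selection at issue behaves like the sketch below (hypothetical names,
not the actual CapacityScheduler code).  MapReduce requests map containers at priority 20
and reducers at 10, and for container requests a larger number means lower priority, so the
maps sort to the front of the kill list every round:

    import java.util.Comparator;
    import java.util.List;

    // Minimal model of the behavior described above: candidates are
    // ordered by container priority and killed from the bottom, so the
    // lowest-priority containers (MapReduce maps) are always shot first.
    class PreemptionVictimSketch {
        static class AppContainer {
            final String id;
            final int priority;   // larger number = lower priority
            AppContainer(String id, int priority) {
                this.id = id;
                this.priority = priority;
            }
        }

        // Sort so the lowest-priority containers come first, then take
        // as many as needed to satisfy the preemption request.
        static List<AppContainer> selectVictims(List<AppContainer> candidates,
                                                int needed) {
            candidates.sort(
                Comparator.comparingInt((AppContainer c) -> c.priority).reversed());
            return candidates.subList(0, Math.min(needed, candidates.size()));
        }
    }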

> CapacityScheduler preemption by container priority can be problematic for MapReduce
> -----------------------------------------------------------------------------------
>
>                 Key: YARN-6191
>                 URL: https://issues.apache.org/jira/browse/YARN-6191
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Jason Lowe
>
> A MapReduce job with thousands of reducers and just a couple of maps left to go was running
> in a preemptable queue.  Periodically other queues would get busy and the RM would preempt
> some resources from the job, but it _always_ picked the job's map tasks first because they
> use the lowest priority containers.  Even though the reducers had a shorter running time,
> most were spared but the maps were always shot.  Since the map tasks ran for a longer time
> than the preemption period, the job was in a perpetual preemption loop.
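
The arithmetic behind the loop is simple: a preempted map restarts from scratch, so a map
that needs longer than one preemption period can never finish.  A toy illustration with
made-up durations (not measurements from the job above):

    // Toy illustration of the perpetual preemption loop described in the
    // report; the numbers below are hypothetical, not measured values.
    public class PreemptionLoopDemo {
        public static void main(String[] args) {
            long mapRuntimeSec = 600;      // time a map task needs to complete
            long preemptPeriodSec = 300;   // interval between preemption rounds

            // Each preemption kills the map and it restarts from zero, so
            // it never accumulates more than preemptPeriodSec of progress.
            boolean canFinish = mapRuntimeSec <= preemptPeriodSec;
            System.out.println("map can ever finish: " + canFinish); // false
        }
    }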



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

