hadoop-yarn-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6191) CapacityScheduler preemption by container priority can be problematic for MapReduce
Date Thu, 16 Feb 2017 15:28:42 GMT

    [ https://issues.apache.org/jira/browse/YARN-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15870131#comment-15870131 ]

Jason Lowe commented on YARN-6191:
----------------------------------

Thanks, Chris!  Having the AM react to the preemption message in the heartbeat will definitely
help a lot for common cases, even if it doesn't do any work-conserving logic and just kills
the reducers.

However, there's still an issue because the preemption message is too general.  For example,
if the message says "going to preempt 60GB of resources" and the AM kills 10 reducers that
are 6GB each on 6 different nodes, the RM can still kill the maps because the RM needed 60GB
of contiguous resources.  Fixing that requires the preemption message to be more expressive/specific
so the AM knows that its actions will indeed prevent the preemption of other containers.
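The gap can be illustrated with a small sketch (illustrative only, not YARN code): the node names and the 6GB-per-reducer figure come from the example above, and the point is that aggregate freed memory says nothing about how much is available on any single node.

```python
# Illustrative model of the scenario above: the AM kills 10 reducers of
# 6GB each, spread over 6 nodes, in response to a "60GB" preemption message.
reducers_killed = {"node0": 2, "node1": 2, "node2": 2, "node3": 2,
                   "node4": 1, "node5": 1}          # 10 reducers on 6 nodes
freed_per_node = {n: 6 * k for n, k in reducers_killed.items()}

total_freed = sum(freed_per_node.values())          # 60GB in aggregate...
largest_single_node = max(freed_per_node.values())  # ...but only 12GB on any one node
```

A request for 60GB of contiguous (single-node) resources is still unsatisfied, so the RM proceeds to kill the maps anyway.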

I still wonder about the logic of preferring lower container priorities regardless of how
long they've been running.  I'm not sure container priority always translates well to how
important a container is to the application, and we might be better served by preferring to
minimize total lost work regardless of container priority.
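The two policies can be contrasted in a hypothetical sketch; the container names, priority values, and runtimes below are illustrative, not the CapacityScheduler's actual data structures. (In YARN, a lower Priority value means a more important container.)

```python
# Hypothetical victim selection under the two policies discussed above.
containers = [
    # (id, yarn_priority_value, seconds_running)
    ("map_00042",  20, 3600),   # long-running map, least important priority
    ("reduce_007", 10,   45),   # freshly started reducer
]

# Policy 1: preempt the least important container (highest priority value).
victim_by_priority = max(containers, key=lambda c: c[1])

# Policy 2: preempt the container with the least accumulated running time,
# i.e. the one whose loss discards the least work.
victim_by_lost_work = min(containers, key=lambda c: c[2])
```

Policy 1 picks the map that has been running for an hour; policy 2 picks the reducer that has barely started.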

> CapacityScheduler preemption by container priority can be problematic for MapReduce
> -----------------------------------------------------------------------------------
>
>                 Key: YARN-6191
>                 URL: https://issues.apache.org/jira/browse/YARN-6191
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: capacityscheduler
>            Reporter: Jason Lowe
>
> A MapReduce job with thousands of reducers and just a couple of maps left to go was running
> in a preemptable queue.  Periodically other queues would get busy and the RM would preempt
> some resources from the job, but it _always_ picked the job's map tasks first because they
> use the lowest priority containers.  Even though the reducers had a shorter running time,
> most were spared but the maps were always shot.  Since the map tasks ran for a longer time
> than the preemption period, the job was in a perpetual preemption loop.
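The perpetual loop in the description can be modeled with a toy simulation; the durations below are hypothetical, and the only assumption carried over from the report is that a map needs longer to finish than the interval between preemption events.

```python
# Toy model of the loop: priority-first victim selection preempts the map
# every cycle, so it restarts from scratch and never completes.
MAP_RUNTIME = 300       # seconds the map needs to complete (hypothetical)
PREEMPT_PERIOD = 120    # seconds between preemption events (hypothetical)

progress, completed = 0, False
for _ in range(10):                 # ten preemption cycles
    progress += PREEMPT_PERIOD
    if progress >= MAP_RUNTIME:
        completed = True
        break
    progress = 0                    # map preempted: all progress is lost
```

After any number of cycles `completed` remains false, matching the perpetual preemption loop described above.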



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

