hadoop-mapreduce-issues mailing list archives

From "Matei Zaharia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-2205) FairScheduler should only preempt tasks for pools/jobs that are up next for scheduling
Date Tue, 30 Nov 2010 20:52:13 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965430#action_12965430 ]

Matei Zaharia commented on MAPREDUCE-2205:

What I was trying to say is not that we shouldn't look at the job ordering, but that we have
to prioritize giving tasks to jobs that have required preemption. If we don't do this, then
there's no guarantee that the jobs at the head of the ordering will be the ones that actually
require preemption. It's true that if we estimate N correctly, then we know that *eventually*
jobs will get to launch tasks before the next preemption interval, but even that is not as
good a guarantee as saying that as soon as your timeout passes, we will kill some tasks and
give those slots directly to you.

I think the scheme I proposed above is the simplest way to achieve this without requiring
any sort of estimation or heuristics. It's exactly the same logic as before, except that
in assignTasks, we sort jobs first by whether they need preemption, and then by the fair share
comparator. We don't need to change the comparator in any way, just to add a bit of extra
logic to prioritize these needy jobs. We are also guaranteed this way that preempted tasks
go directly to a job that needed preemption, and that you get your slots one or two heartbeats
after your preemption timeout expires.
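The sorting scheme described above can be sketched as a composite comparator. This is a minimal illustration, not the actual FairScheduler code: the Job class and its fields (needsPreemption, fairShareDeficit) are hypothetical stand-ins for the scheduler's real job state and fair-share comparator.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PreemptionPrioritySort {
    // Illustrative stand-in for a runnable job; not the real
    // FairScheduler classes.
    static class Job {
        final String name;
        final boolean needsPreemption;  // true if this job's preemption timeout fired
        final double fairShareDeficit;  // proxy for the fair-share comparator's ordering

        Job(String name, boolean needsPreemption, double fairShareDeficit) {
            this.name = name;
            this.needsPreemption = needsPreemption;
            this.fairShareDeficit = fairShareDeficit;
        }
    }

    // In assignTasks, sort jobs first by whether they required preemption,
    // then by the existing fair-share comparator (approximated here by
    // deficit, largest first). The comparator itself is unchanged; we only
    // prepend one extra criterion.
    static void sortForAssignment(List<Job> jobs) {
        jobs.sort(Comparator
            .comparing((Job j) -> !j.needsPreemption)   // needy jobs first
            .thenComparing(j -> -j.fairShareDeficit));  // then most starved
    }

    public static void main(String[] args) {
        List<Job> jobs = new ArrayList<>();
        jobs.add(new Job("A", false, 5.0));
        jobs.add(new Job("B", true, 1.0));
        jobs.add(new Job("C", false, 3.0));
        sortForAssignment(jobs);
        for (Job j : jobs) {
            System.out.println(j.name);
        }
    }
}
```

Job B sorts ahead of A and C despite its smaller deficit, so a preempted slot goes directly to the job whose timeout triggered the preemption.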

> FairScheduler should only preempt tasks for pools/jobs that are up next for scheduling
> --------------------------------------------------------------------------------------
>                 Key: MAPREDUCE-2205
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2205
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/fair-share
>            Reporter: Joydeep Sen Sarma
> We have hit a problem with the preemption implementation in the FairScheduler where the
following happens:
> # job X runs short of fair share or min share and requests/causes N tasks to be preempted
> # when slots are then scheduled - tasks from some other job are actually scheduled
> # after preemption_interval has passed, job X finds it's still underscheduled and requests
preemption. goto 1.
> This has caused widespread preemption of tasks and the cluster going from high utilization
to low utilization in a few minutes.
> Some of the problems are specific to our internal version of hadoop (still 0.20 and doesn't
have the hierarchical FairScheduler) - but I think the issue here is generic (just took a
look at the trunk assignTasks and tasksToPreempt routines). The basic problem seems to be
that the logic of assignTasks+FairShareComparator is not consistent with the logic in tasksToPreempt().
The latter can choose to preempt tasks on behalf of jobs that may not be first up for scheduling
based on the FairComparator. Understanding whether these two separate pieces of logic are
consistent and keeping it that way is difficult.
> It seems that a much safer preemption implementation is to walk the jobs in the order
they would be scheduled on the next heartbeat - and only preempt for jobs that are at the
head of this sorted queue. In MAPREDUCE-2048 - we have already introduced a pre-sorted list
of jobs ordered by current scheduling priority. It seems much easier to preempt only jobs
at the head of this sorted list.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
