hadoop-common-dev mailing list archives

From "Vinod K V (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4665) Add preemption to the fair scheduler
Date Wed, 10 Jun 2009 14:42:12 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12718099#action_12718099 ]

Vinod K V commented on HADOOP-4665:

Matei, I have (re)started looking at the patch. The changes look good overall, except for the
following points:
- The preemptionInterval variable is initialized to 30000, whereas the default value is 15000.
Shouldn't they be consistent?
- EagerTaskInitializationListener is not removed from the list of listeners in terminate().
- You seem to have missed one of my earlier points (see the sketch after this list):
bq. The count tasksDueToFairShare seems to be calculated against the full fair share of slots
rather than the advertised half of the fair share. I think this is a mistake, since
isStarvedForFairShare() checks for half of the fair share. Or am I missing something?
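
To make the last point concrete, here is a minimal sketch of how I would expect the count and
the check to stay consistent. The names and the rounding below are illustrative assumptions,
not the actual patch code:

{code}
// Sketch only -- illustrative names and rounding, not the patch's code.
public class FairSharePreemptionSketch {

  /** Half of the job's fair share, the advertised starvation threshold. */
  static int halfFairShare(double fairShare) {
    return (int) Math.ceil(fairShare / 2);
  }

  /** A job is starved when it holds fewer slots than half its fair share. */
  static boolean isStarvedForFairShare(int runningTasks, double fairShare) {
    return runningTasks < halfFairShare(fairShare);
  }

  /**
   * Tasks to preempt on the job's behalf. For consistency with
   * isStarvedForFairShare(), this counts up to half the fair share,
   * not the full fair share.
   */
  static int tasksDueToFairShare(int runningTasks, double fairShare) {
    return Math.max(0, halfFairShare(fairShare) - runningTasks);
  }
}
{code}

The point is simply that both methods should share one threshold; whichever threshold the
patch intends (half or full fair share), the check and the count should use the same one.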

The changes to the test cases and documentation are extensive; it will take me until tomorrow
to finish reviewing them. Thanks for your patience.

> Add preemption to the fair scheduler
> ------------------------------------
>                 Key: HADOOP-4665
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4665
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: contrib/fair-share
>            Reporter: Matei Zaharia
>            Assignee: Matei Zaharia
>             Fix For: 0.21.0
>         Attachments: fs-preemption-v0.patch, hadoop-4665-v1.patch, hadoop-4665-v1b.patch,
>                      hadoop-4665-v2.patch, hadoop-4665-v3.patch, hadoop-4665-v4.patch,
>                      hadoop-4665-v5.patch, hadoop-4665-v6.patch, hadoop-4665-v7.patch,
>                      hadoop-4665-v7b.patch
> Task preemption is necessary in a multi-user Hadoop cluster for two reasons: users might
> submit long-running tasks by mistake (e.g. an infinite loop in a map program), or tasks may
> run long because they have to process large amounts of data. The Fair Scheduler (HADOOP-3746)
> has a concept of guaranteed capacity for certain queues, as well as a goal of providing good
> performance for interactive jobs on average through fair sharing. Therefore, it will support
> preemption under two conditions:
> 1) A job isn't getting its _guaranteed_ share of the cluster for at least T1 seconds.
> 2) A job is getting significantly less than its _fair_ share for T2 seconds (e.g. less
> than half its share).
> T1 will be chosen smaller than T2 (and will be configurable per queue) so that guarantees
> are met quickly. T2 is meant as a last resort in case non-critical jobs in queues with no
> guaranteed capacity are being starved.
> When deciding which tasks to kill to make room for the job, we will use the following:
> - Look for tasks to kill only in jobs that have more than their fair share, ordering
> these by deficit (most overscheduled jobs first).
> - For maps: kill tasks that have run for the least amount of time (limiting wasted time).
> - For reduces: similar to maps, but give extra preference to reduces in the copy phase,
> where there is not much map output per task (at Facebook, we have observed this to be the
> main time we need preemption: when a job has a long map phase and its reducers are mostly
> sitting idle, filling up slots).
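
For readers following this in the archive, here is a minimal sketch of the two starvation
triggers and the victim-selection order described above. Every name in it (minShare, fairShare,
lastTimeAtMinShare, deficit, the copy-phase flag, and so on) is an illustrative assumption,
not the patch's actual data model:

{code}
// Sketch only: illustrative types and fields, not the patch's actual code.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PreemptionPolicySketch {

  static class JobSketch {
    int runningTasks;              // slots the job currently holds
    double minShare;               // guaranteed capacity of the job's pool
    double fairShare;              // current fair share from the scheduler
    long lastTimeAtMinShare;       // last time runningTasks >= minShare
    long lastTimeAtHalfFairShare;  // last time runningTasks >= fairShare / 2
    long deficit;                  // fair-sharing deficit, for victim ordering
    List<TaskSketch> tasks = new ArrayList<TaskSketch>();
  }

  static class TaskSketch {
    long runtimeMs;                // how long the attempt has been running
    boolean reduceInCopyPhase;     // reduce still fetching map output
  }

  /** Condition 1: below the guaranteed (min) share for at least t1 ms. */
  static boolean starvedForMinShare(JobSketch j, long now, long t1) {
    return j.runningTasks < j.minShare && now - j.lastTimeAtMinShare >= t1;
  }

  /** Condition 2: below half the fair share for at least t2 ms (t2 > t1). */
  static boolean starvedForFairShare(JobSketch j, long now, long t2) {
    return j.runningTasks < j.fairShare / 2
        && now - j.lastTimeAtHalfFairShare >= t2;
  }

  /**
   * Pick tasks to kill: only jobs over their fair share are candidates,
   * most over-scheduled (lowest deficit) first; within a job, prefer
   * copy-phase reduces, then the shortest-running tasks, to limit the
   * amount of work thrown away.
   */
  static List<TaskSketch> chooseVictims(List<JobSketch> jobs, int slotsNeeded) {
    List<JobSketch> over = new ArrayList<JobSketch>();
    for (JobSketch j : jobs) {
      if (j.runningTasks > j.fairShare) {
        over.add(j);
      }
    }
    over.sort(Comparator.comparingLong((JobSketch j) -> j.deficit));

    List<TaskSketch> victims = new ArrayList<TaskSketch>();
    for (JobSketch j : over) {
      List<TaskSketch> candidates = new ArrayList<TaskSketch>(j.tasks);
      candidates.sort(Comparator
          .comparing((TaskSketch t) -> !t.reduceInCopyPhase)
          .thenComparingLong((TaskSketch t) -> t.runtimeMs));
      for (TaskSketch t : candidates) {
        if (victims.size() >= slotsNeeded) {
          return victims;
        }
        victims.add(t);
      }
    }
    return victims;
  }
}
{code}

The ordering inside a job (copy-phase reduces first, then the youngest tasks) is one reading of
the heuristics in the description; the actual patch may weigh these differently.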

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
