hadoop-common-dev mailing list archives

From "Arun C Murthy (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-4667) Global scheduling in the Fair Scheduler
Date Fri, 16 Jan 2009 00:43:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12664342#action_12664342 ]

Arun C Murthy commented on HADOOP-4667:
---------------------------------------

bq. I set them to be times rather than number of tasktrackers so that they are very easy for
an administrator to understand (if they want some kind of guarantee about response time) and
so that you don't need to take into account number of nodes in your cluster to decide what
is a reasonable number.

Hmm... sorry, I should have been clearer.

I propose we use a fraction of the cluster size (i.e. the total number of tasktrackers in the system, via ClusterStatus) rather than a time. For example, we could say that dataLocalWait is equivalent to 100% of cluster size (which implies 1 round of heartbeats, i.e. 5s for 500 nodes, 10s for 1000 nodes, etc.) and the next threshold to 200% of cluster size (i.e. 10s for 500 nodes, 20s for 1000). This will keep it relatively simple and tractable. Thoughts?
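
To make the arithmetic concrete, here is a rough sketch of turning a fraction-of-cluster-size setting into a skip-count threshold. The class and field names are hypothetical placeholders, not part of the attached patch; the tracker count would come from ClusterStatus as noted above:

{code}
// Rough sketch only: convert a configured fraction of the cluster size into the
// number of heartbeats a job may be skipped before it is allowed to relax locality.
// Class and field names are hypothetical; the tracker count would be read from
// ClusterStatus (e.g. getTaskTrackers()).
public class LocalityWaitThresholds {

  private final double dataLocalFraction;  // e.g. 1.0 = 100% of cluster size
  private final double rackLocalFraction;  // e.g. 2.0 = 200% of cluster size

  public LocalityWaitThresholds(double dataLocalFraction, double rackLocalFraction) {
    this.dataLocalFraction = dataLocalFraction;
    this.rackLocalFraction = rackLocalFraction;
  }

  /** Heartbeats to skip before allowing rack-local tasks (~1 heartbeat round at 100%). */
  public int dataLocalWaitHeartbeats(int numTaskTrackers) {
    return (int) Math.ceil(dataLocalFraction * numTaskTrackers);
  }

  /** Heartbeats to skip before allowing off-rack tasks (~2 heartbeat rounds at 200%). */
  public int rackLocalWaitHeartbeats(int numTaskTrackers) {
    return (int) Math.ceil(rackLocalFraction * numTaskTrackers);
  }
}
{code}

With these defaults, a 500-node cluster gives thresholds of 500 and 1000 skipped heartbeats, matching the roughly 5s and 10s figures above, and a 1000-node cluster scales to 10s and 20s without retuning.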

> Global scheduling in the Fair Scheduler
> ---------------------------------------
>
>                 Key: HADOOP-4667
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4667
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: contrib/fair-share
>            Reporter: Matei Zaharia
>         Attachments: fs-global-v0.patch
>
>
> The current schedulers in Hadoop all examine a single job on every heartbeat when choosing which tasks to assign, choosing the job based on FIFO or fair sharing. There are inherent limitations to this approach. For example, if the job at the front of the queue is small (e.g. 10 maps in a cluster of 100 nodes), then on average it will launch only about one local map in its first 10 heartbeats at the head of the queue. This leads to very poor locality for small jobs. Instead, we need a more "global" view of scheduling that can look at multiple jobs. To resolve the locality problem, we will use the following algorithm:
> - If the job at the head of the queue has no node-local task to launch, skip it and look through other jobs.
> - If a job has waited at least T1 seconds while being skipped, also allow it to launch rack-local tasks.
> - If a job has waited at least T2 > T1 seconds, also allow it to launch off-rack tasks.
> This algorithm improves locality while bounding the delay that any job experiences in launching a task.
> It turns out that whether waiting is useful depends on how many tasks are left in the job (which determines the probability of getting a heartbeat from a node with a local task) and on whether the job is CPU- or I/O-bound. Thus there may be logic for removing the wait on the last few tasks in the job.
> As a related issue, once we allow global scheduling, we can launch multiple tasks per heartbeat, as in HADOOP-3136. The initial implementation of HADOOP-3136 adversely affected performance because it only launched multiple tasks from the same job, but with the wait rule above, we will only do this for jobs that are allowed to launch non-local tasks.
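
As a quick illustration of the wait rule described in the issue above, a minimal sketch of the skip/relax logic follows. The Job and Task interfaces, their method names, and the class itself are hypothetical placeholders, not the API of fs-global-v0.patch:

{code}
// Illustrative sketch of the locality-wait algorithm in the description above.
// The Job/Task interfaces and method names are hypothetical placeholders.
import java.util.List;

public class DelaySchedulingSketch {

  public enum LocalityLevel { NODE_LOCAL, RACK_LOCAL, OFF_RACK }

  public interface Task { }

  public interface Job {
    /** Milliseconds this job has been waiting since it was first skipped. */
    long millisSkipped();
    /** A launchable task at exactly this locality level for the given tracker, or null. */
    Task obtainTask(String trackerHost, LocalityLevel level);
    void recordSkip();
  }

  private final long t1Millis;  // wait before rack-local tasks are allowed
  private final long t2Millis;  // wait before off-rack tasks are allowed (T2 > T1)

  public DelaySchedulingSketch(long t1Millis, long t2Millis) {
    this.t1Millis = t1Millis;
    this.t2Millis = t2Millis;
  }

  /** Walk the jobs in fair-share order and return the first task this tracker may run. */
  public Task assignTask(List<Job> jobsInFairShareOrder, String trackerHost) {
    for (Job job : jobsInFairShareOrder) {
      LocalityLevel allowed = allowedLevel(job.millisSkipped());
      for (LocalityLevel level : LocalityLevel.values()) {
        if (level.ordinal() > allowed.ordinal()) {
          break;  // the job must keep waiting for a better-placed heartbeat
        }
        Task task = job.obtainTask(trackerHost, level);
        if (task != null) {
          return task;
        }
      }
      job.recordSkip();  // nothing launchable within the allowed level: skip this job
    }
    return null;  // no job could use this heartbeat
  }

  private LocalityLevel allowedLevel(long millisSkipped) {
    if (millisSkipped >= t2Millis) return LocalityLevel.OFF_RACK;
    if (millisSkipped >= t1Millis) return LocalityLevel.RACK_LOCAL;
    return LocalityLevel.NODE_LOCAL;
  }
}
{code}

The note above about the last few tasks would show up as an extra condition in allowedLevel() (for example, jumping straight to OFF_RACK when only a handful of tasks remain, since few heartbeats will come from a node holding one of the remaining blocks). Launching multiple tasks per heartbeat as in HADOOP-3136 would amount to calling assignTask() repeatedly for the same tracker, restricted to jobs already allowed to go non-local.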

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

