hadoop-common-dev mailing list archives

From "Vinod K V (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-4035) Modify the capacity scheduler (HADOOP-3445) to schedule tasks based on memory requirements and task trackers free memory
Date Mon, 06 Oct 2008 09:53:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-4035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod K V updated HADOOP-4035:

    Attachment: HADOOP-4035-20081006.txt

Attaching a patch.
- Changed CapacityTaskScheduler and the default scheduler to accept high-RAM jobs. The cluster
will be blocked until the job at the head of the queue is served. This is according to the
above proposal.
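A minimal sketch of the head-of-queue blocking described above: if the tracker cannot satisfy the memory demand of the job at the head of the queue, assign nothing rather than letting smaller jobs jump ahead. All names here are hypothetical, not the patch's actual code.

```java
// Illustrative sketch of head-of-queue blocking for high-RAM jobs.
// Class and method names are hypothetical, not from the patch.
public class HeadOfQueueBlocking {
    // Returns the index of the job to schedule, or -1 to leave the slot
    // empty until the job at the head of the queue can be served.
    public static int pickJob(long[] jobMemoryDemands, long trackerFreeMemory) {
        if (jobMemoryDemands.length == 0) {
            return -1;  // nothing to schedule
        }
        // Only ever consider the head of the queue; block otherwise.
        return jobMemoryDemands[0] <= trackerFreeMemory ? 0 : -1;
    }
}
```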

- Modified TaskTrackerStatus to report free memory for map tasks and for reduce tasks separately;
the total memory available to the TT is distributed between map tasks and reduce tasks in
the ratio of their slots. This is needed because we don't want map tasks to use memory
allocated to reduce tasks and vice versa.
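A minimal sketch of such a slot-proportional split (the class and method names are illustrative, not the actual TaskTrackerStatus fields):

```java
// Illustrative sketch: dividing a tracker's total task memory between
// map and reduce tasks in proportion to their slot counts. Names are
// hypothetical, not Hadoop's actual API.
public class MemorySplit {
    // Returns {mapMemory, reduceMemory} for the given totals.
    public static long[] split(long totalMemory, int mapSlots, int reduceSlots) {
        int totalSlots = mapSlots + reduceSlots;
        if (totalSlots == 0) {
            return new long[] {0L, 0L};
        }
        long mapMemory = totalMemory * mapSlots / totalSlots;
        // Give the remainder to reduce tasks so nothing is lost to rounding.
        long reduceMemory = totalMemory - mapMemory;
        return new long[] {mapMemory, reduceMemory};
    }
}
```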

* Jobs that cannot run on any TT will be killed on the first heartbeat from any TT.
    ** A job cannot run on any TT for one of two reasons: 1) no TT in the cluster can serve
the job's tasks because of the job's very high memory requirements, or 2) there were TTs in
the cluster that could run the job, but after the job started, ALL of those TTs went down.
We need to kill such jobs because they would otherwise block the whole cluster forever.
    ** The maximum size of a job allowed in the cluster is determined by going through the
list of all live TTs and finding the biggest job size that can be supported. Note that this
is done every time assignTasks is called (i.e. whenever there's a free slot on a TT), but
this could be improved if we had a TaskTrackerListener interface that told us precisely when
new TTs get added to the cluster or when old TTs expire.
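The "largest supportable job" check above could be sketched as follows: scan all live trackers, take the maximum per-task memory any one of them could offer, and kill any job demanding more than that. Names here are hypothetical, not the patch's code.

```java
import java.util.List;

// Illustrative sketch of the cluster-wide job-size limit described above.
// Class and method names are hypothetical, not from the patch.
public class ClusterLimit {
    // The biggest per-task memory demand any single live TT can serve.
    public static long maxSupportedTaskMemory(List<Long> perTrackerTaskMemory) {
        long max = 0L;
        for (long m : perTrackerTaskMemory) {
            if (m > max) {
                max = m;
            }
        }
        return max;
    }

    // A job whose per-task memory demand exceeds the cluster-wide maximum
    // can never be scheduled and would block the queue forever, so kill it.
    public static boolean shouldKill(long jobTaskMemory, List<Long> perTrackerTaskMemory) {
        return jobTaskMemory > maxSupportedTaskMemory(perTrackerTaskMemory);
    }
}
```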

- Added test-cases for both schedulers testing high-RAM job requirements.
- Changed CapacityTaskScheduler.TaskSchedulingMgr.Type to an enum instead of a string. It's
more convenient this way.
- Changed a couple of log statements so that they are more helpful in debugging.

TestHighRAMJobs needs a major rewrite because of the separate free-memory values for map and
reduce tasks. It doesn't even compile now; I will work on that. Negative values for memory on
a TT or in a job may adversely affect the working of this patch; I will investigate that and
see if it needs a fix. This patch can be reviewed irrespective of these two (side) issues:
they will add code but won't change what the patch already does.

> Modify the capacity scheduler (HADOOP-3445) to schedule tasks based on memory requirements
> and task trackers free memory
> ------------------------------------------------------------------------------------------------------------------------
>                 Key: HADOOP-4035
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4035
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/capacity-sched
>    Affects Versions: 0.19.0
>            Reporter: Hemanth Yamijala
>            Assignee: Vinod K V
>            Priority: Blocker
>             Fix For: 0.19.0
>         Attachments: 4035.1.patch, HADOOP-4035-20080918.1.txt, HADOOP-4035-20081006.txt
> HADOOP-3759 introduced configuration variables that can be used to specify memory requirements
> for jobs, and also modified the tasktrackers to report their free memory. The capacity scheduler
> in HADOOP-3445 should schedule tasks based on these parameters. A task that is scheduled on
> a TT that uses more than the default amount of memory per slot can be viewed as effectively
> using more than one slot, as it would decrease the amount of free memory on the TT by more
> than the default amount while it runs. The scheduler should make the used capacity account
> for this additional usage while enforcing limits, etc.
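The "effectively more than one slot" accounting in the description can be illustrated with simple arithmetic: a task demanding more than the default memory per slot occupies a rounded-up multiple of slots. This is a sketch under that reading, not the scheduler's actual code.

```java
// Illustrative arithmetic for the "effective slots" accounting described
// above. Names are hypothetical, not from the capacity scheduler.
public class EffectiveSlots {
    // Number of slots a task effectively occupies given its memory demand.
    public static int slotsUsed(long taskMemory, long defaultMemoryPerSlot) {
        if (taskMemory <= defaultMemoryPerSlot) {
            return 1;  // fits within one slot's worth of memory
        }
        // Round up: 1.5 slots' worth of memory still occupies 2 slots.
        return (int) ((taskMemory + defaultMemoryPerSlot - 1) / defaultMemoryPerSlot);
    }
}
```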

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
