hadoop-mapreduce-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1603) Add a plugin class for the TaskTracker to determine available slots
Date Tue, 16 Mar 2010 22:06:27 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846164#action_12846164 ]

Allen Wittenauer commented on MAPREDUCE-1603:

I like this idea a lot.  It starts to crack open the door to much more advanced scheduling.
For example, it'd be great to be able to pass up to the scheduling system whether a machine
is 32-bit or 64-bit, running Windows or some Unix flavor, etc. This means that at some future
point, jobs could request a particular environment.

> Add a plugin class for the TaskTracker to determine available slots
> -------------------------------------------------------------------
>                 Key: MAPREDUCE-1603
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1603
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>          Components: tasktracker
>    Affects Versions: 0.22.0
>            Reporter: Steve Loughran
>            Priority: Minor
> Currently the number of available map and reduce slots is determined by the configuration.
> MAPREDUCE-922 has proposed working things out automatically, but that is going to depend
> a lot on the specific tasks; it is hard to get right for everyone.
> There is a Hadoop cluster near me that would like to use CPU time from other machines in
> the room: machines which cannot offer storage, but which will have spare CPU time when
> they aren't running code scheduled with a grid scheduler. The nodes could run a TT which
> would report a dynamic number of slots, the number depending upon the current grid workload.
> I propose we add a plugin point here, so that different people can develop plugin classes
> that determine the number of available slots based on workload, RAM, CPU, power budget,
> thermal parameters, etc. There is lots of space for customisation and improvement. And by
> having it as a plugin, people get to integrate with whatever datacentre schedulers they
> have without Hadoop itself needing to be altered. The base implementation would behave as
> today: subtract the number of active map and reduce slots from the configured values, and
> push that out.
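A plugin point along these lines might look like the minimal Java sketch below. The interface and class names, methods, and constructor are hypothetical illustrations of the proposal, not actual Hadoop API; the default implementation mirrors the behaviour described above (configured slots minus active slots).

```java
// Hypothetical plugin interface for dynamic slot reporting.
// None of these names exist in Hadoop; they only sketch the proposal.
public interface SlotAdvisor {
    /** Map slots currently available on this node. */
    int availableMapSlots();
    /** Reduce slots currently available on this node. */
    int availableReduceSlots();
}

/**
 * Sketch of the base implementation described in the issue:
 * subtract the active slots from the statically configured totals.
 * A grid-aware plugin could instead consult the local grid scheduler.
 */
class ConfiguredSlotAdvisor implements SlotAdvisor {
    private final int configuredMapSlots;
    private final int configuredReduceSlots;
    private int activeMapSlots;
    private int activeReduceSlots;

    ConfiguredSlotAdvisor(int configuredMapSlots, int configuredReduceSlots) {
        this.configuredMapSlots = configuredMapSlots;
        this.configuredReduceSlots = configuredReduceSlots;
    }

    /** Called by the TaskTracker as tasks start and finish. */
    void setActive(int mapSlots, int reduceSlots) {
        this.activeMapSlots = mapSlots;
        this.activeReduceSlots = reduceSlots;
    }

    @Override
    public int availableMapSlots() {
        // Never report a negative slot count.
        return Math.max(0, configuredMapSlots - activeMapSlots);
    }

    @Override
    public int availableReduceSlots() {
        return Math.max(0, configuredReduceSlots - activeReduceSlots);
    }
}
```

A node under grid load would simply swap in a different SlotAdvisor implementation; the TaskTracker's heartbeat code would not need to know where the numbers come from.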

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
