hadoop-mapreduce-issues mailing list archives

From "Matei Zaharia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-961) ResourceAwareLoadManager to dynamically decide new tasks based on current CPU/memory load on TaskTracker(s)
Date Sun, 01 Nov 2009 23:54:59 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12772390#action_12772390 ]

Matei Zaharia commented on MAPREDUCE-961:
-----------------------------------------

Hi Scott and Dhruba,

I've looked at the patch a little bit and have a few comments:
# I agree with Dhruba that it would be good to have the option of running multiple Hadoop
clusters in parallel. It's also good design to allow the metrics data to be consumed by multiple
clients.
# In MemBasedLoadManager.canLaunchTask, you are returning true in some cases and saying that
this is "equivalent to the case of using only CapBasedLoadManager". How is that equivalent?
I think you would need to return super.canLaunchTask(...), not true, because the Fair Scheduler
itself doesn't look at slot counts (see the first sketch after this list).
# It might be useful to use the max map slots / max reduce slots settings as upper bounds
on the total number of tasks on each node, to limit the number of processes launched. In this
case an administrator could configure the slots higher (e.g. 20 map slots and 10 reduce slots),
and node utilization would determine when fewer than this number of tasks should be launched.
Otherwise, a job with very low-utilization tasks could cause hundreds of processes to be
launched on each node. (The first sketch below covers this point too.)
# Have you thought in detail about how the MemBasedLoadManager will work when the scheduler
tries to launch multiple tasks per heartbeat (part of MAPREDUCE-706)? I think there are two
questions:
#* First, you will need to cap the number of tasks launched per heartbeat based on free memory
on the node, so that we don't end up launching too many tasks and overcommitting memory. One
way to do this might be to count the tasks we schedule against the free memory on the node,
conservatively estimating each to use 2 GB or so (admin-configurable); see the second sketch
below.
#* Second, it's important to launch both reduces and maps if both types of tasks are available.
The current multiple-task-per-heartbeat code in MAPREDUCE-706 (and in all the other schedulers
as far as I know) will first try to launch map tasks until canLaunchTask(TaskType.MAP) returns
false (or until there are no pending map tasks), and will then look for pending reduce tasks.
With the current MemBasedLoadManager, this would starve reduces whenever there are pending
maps. It would be better to alternate between the two task types if both are available (third
sketch below).
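To make points 2 and 3 concrete, here's roughly what I have in mind, as a sketch rather than
a tested patch. I'm approximating the canLaunchTask signature from memory, and
getReportedFreeMemory / reservedPerTaskMB are hypothetical stand-ins for however the patch
exposes the external metrics feed and a configured per-task memory reservation:

{code}
// Sketch only: signature approximated, helper names hypothetical.
@Override
public boolean canLaunchTask(TaskTrackerStatus tracker, JobInProgress job,
                             TaskType type) {
  // Point 3: delegate to CapBasedLoadManager first, so the configured
  // max map/reduce slots act as a hard upper bound per node...
  // Point 2: ...and never return plain true, since the Fair Scheduler
  // itself does no slot accounting.
  if (!super.canLaunchTask(tracker, job, type)) {
    return false;
  }
  // Below the slot cap, admit the task only if the node reports enough
  // free memory through the external metrics feed.
  long freeMemMB = getReportedFreeMemory(tracker);  // hypothetical helper
  return freeMemMB >= reservedPerTaskMB;            // hypothetical config value
}
{code}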
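For point 4a, the per-heartbeat accounting could be as simple as something like the following;
the class name and the 2 GB (2048 MB) default are made up for illustration:

{code}
// Hypothetical per-heartbeat memory accounting; reset a fresh instance
// at the start of each heartbeat. Names and defaults are illustrative.
class HeartbeatMemoryBudget {
  private final long assumedTaskMemMB;  // conservative per-task estimate,
                                        // e.g. 2048 MB, admin-configurable
  private long committedMB = 0;         // memory charged so far this heartbeat

  HeartbeatMemoryBudget(long assumedTaskMemMB) {
    this.assumedTaskMemMB = assumedTaskMemMB;
  }

  // Charge one more task against the node's reported free memory;
  // returns false once another task would overcommit the node.
  boolean tryReserve(long reportedFreeMemMB) {
    if (reportedFreeMemMB - committedMB < assumedTaskMemMB) {
      return false;
    }
    committedMB += assumedTaskMemMB;
    return true;
  }
}
{code}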
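And for point 4b, the assignment loop could alternate types along these lines; tryLaunch is a
hypothetical stand-in for the scheduler's per-type launch path (returning null when the load
manager refuses or no task of that type is pending):

{code}
// Hypothetical: alternate between maps and reduces within one heartbeat
// so pending maps can't starve reduces.
List<Task> tasks = new ArrayList<Task>();
TaskType next = TaskType.MAP;
boolean mapsDone = false, reducesDone = false;
while (!(mapsDone && reducesDone)) {
  if (next == TaskType.MAP && !mapsDone) {
    Task t = tryLaunch(TaskType.MAP);             // hypothetical helper
    if (t == null) { mapsDone = true; } else { tasks.add(t); }
  } else if (next == TaskType.REDUCE && !reducesDone) {
    Task t = tryLaunch(TaskType.REDUCE);
    if (t == null) { reducesDone = true; } else { tasks.add(t); }
  }
  next = (next == TaskType.MAP) ? TaskType.REDUCE : TaskType.MAP;
}
{code}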

> ResourceAwareLoadManager to dynamically decide new tasks based on current CPU/memory
> load on TaskTracker(s)
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-961
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-961
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>          Components: contrib/fair-share
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: HIVE-961.patch, MAPREDUCE-961-v2.patch
>
>
> Design and develop a ResourceAwareLoadManager for the FairShare scheduler that dynamically
> decides how many maps/reduces to run on a particular machine based on the CPU/memory/diskIO/network
> usage of that machine. The amount of resources currently used on each task tracker is fed
> into the ResourceAwareLoadManager in real-time by an entity that is external to Hadoop.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

