hadoop-common-issues mailing list archives

From "Jonathan Gray (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5170) Set max map/reduce tasks on a per-job basis, either per-node or cluster-wide
Date Fri, 03 Jul 2009 18:47:47 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12727093#action_12727093 ]

Jonathan Gray commented on HADOOP-5170:
---------------------------------------

So pooling is just a one-time thing specified when I submit the job?  It's not something
that persists and that I submit jobs into?

I'm a big consumer of MR but have been on a need-to-know basis with respect to these things.
 I guess I now need to know.  Again, part of what I liked about this issue/solution was that
it's powerful, accessible, and easy to understand.  I understand the concerns of larger users
and the need to support this... And I would again ask if we could stick it into a corner somewhere
so that it's still easy to access but does not get in the way of everything else.

Otherwise, what I'd be interested in is an explanation / example of how users of this patch
might accomplish the same types of things.  For example, only allowing a particular job to
use one task per node (or even capping total tasks at a time at the total number of nodes),
while at the same time allowing other jobs tens of tasks per node.  I'm not following how
that would work.
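
To make my question concrete, here's roughly what I would imagine the client side looking
like.  This is just a sketch under my own assumptions: I'm guessing the patch exposes the
limits as per-job configuration properties, and the property names below
(mapred.max.maps.per.node, mapred.max.running.maps) are illustrative placeholders, not
names confirmed from the patch.

    import org.apache.hadoop.mapred.JobConf;

    public class TaskLimitSketch {
      public static void main(String[] args) {
        // CPU-bound job: cap at one map task per node so the DataNode and
        // RegionServer on each machine are not starved.
        // NOTE: "mapred.max.maps.per.node" is an assumed property name.
        JobConf cpuJob = new JobConf(TaskLimitSketch.class);
        cpuJob.setJobName("cpu-heavy");
        cpuJob.setInt("mapred.max.maps.per.node", 1);

        // Or cap total running maps cluster-wide so that total tasks at a
        // time = total nodes.  NOTE: "mapred.max.running.maps" is also an
        // assumed name; 50 stands in for the node count of the cluster.
        cpuJob.setInt("mapred.max.running.maps", 50);

        // Latency-bound job on the same cluster: allow tens of tasks per node.
        JobConf scanJob = new JobConf(TaskLimitSketch.class);
        scanJob.setJobName("hbase-scan");
        scanJob.setInt("mapred.max.maps.per.node", 30);

        // Jobs would then be submitted as usual, e.g. JobClient.runJob(cpuJob).
      }
    }

If the real knobs look anything like this, then the one-task-per-node case and the
tens-of-tasks-per-node case are just different values of the same per-job property, which
is what I'd want to confirm.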

> Set max map/reduce tasks on a per-job basis, either per-node or cluster-wide
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-5170
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5170
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: mapred
>            Reporter: Jonathan Gray
>            Assignee: Matei Zaharia
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5170-tasklimits-v3-0.18.3.patch, tasklimits-v2.patch, tasklimits-v3-0.19.patch,
> tasklimits-v3.patch, tasklimits-v4-20.patch, tasklimits-v4.patch, tasklimits.patch
>
>
> There are a number of use cases for being able to do this.  The focus of this jira should
> be on finding what would be the simplest to implement that would satisfy the most use cases.
> This could be implemented as either a per-node maximum or a cluster-wide maximum.  It
> seems that for most uses the former is preferable; however, either would fulfill the
> requirements of this jira.
> Some of the reasons for allowing this feature (mine and from others on list):
> - I have some very large CPU-bound jobs.  I am forced to keep the max maps/node limit
> at 2 or 3 (on a 4-core node) so that I do not starve the DataNode and RegionServer.  I have
> other jobs that are network-latency bound and would like to be able to run high numbers of
> their tasks concurrently on each node.  Though I can thread some jobs, some use cases are
> difficult to thread (scanning from HBase), and threading adds significant complexity to
> the job rather than letting Hadoop handle the concurrency.
> - Poor assignment of tasks to nodes creates situations where you have multiple reducers
> on a single node while other nodes receive none.  A limit of 1 reducer per node for that
> job would prevent that from happening.  (only works with a per-node limit)
> - Poor man's MR job virtualization.  Since we can limit a job's resources, this gives much
> more control in allocating and dividing up the resources of a large cluster.  (makes most
> sense w/ a cluster-wide limit)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

