hadoop-common-user mailing list archives

From S D <sd.codewarr...@gmail.com>
Subject Re: Controlling maximum # of tasks per node on per-job basis?
Date Sat, 14 Mar 2009 02:04:57 GMT
I ran into this problem as well, and several people on this list provided a
helpful response: once the TaskTracker starts, the maximum number of tasks
per node cannot be changed. In my case, I've worked around this by
stopping and starting MapReduce (stop-mapred.sh, start-mapred.sh) between jobs.
There is a JIRA issue for this, so it may change in the future: HADOOP-5170 (
http://issues.apache.org/jira/browse/HADOOP-5170)
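
For anyone else hitting this, a rough sketch of the restart workaround might look like the following. This assumes the standard Hadoop bin/ scripts and that conf/hadoop-site.xml is synced to every node before restarting; the job jar and class names are placeholders, not anything from this thread:

```sh
# Sketch of the restart-between-jobs workaround (assumes standard Hadoop
# bin/ scripts and a hadoop-site.xml that is distributed to all nodes).

# 1. Stop the MapReduce daemons (JobTracker + TaskTrackers).
bin/stop-mapred.sh

# 2. Lower the per-node task limit in conf/hadoop-site.xml on each node.
#    This property is read by each TaskTracker only at startup:
#
#    <property>
#      <name>mapred.tasktracker.map.tasks.maximum</name>
#      <value>1</value>
#    </property>

# 3. Restart so the TaskTrackers pick up the new limit, then run the
#    memory-heavy job (my-job.jar / MyJob are hypothetical names).
bin/start-mapred.sh
bin/hadoop jar my-job.jar MyJob input/ output/
```

Afterwards you would reverse the edit (back to a value of 3) and restart again before running your normal jobs.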

John

On Fri, Mar 13, 2009 at 9:47 PM, Stuart White <stuart.white1@gmail.com> wrote:

> My cluster nodes have 2 dual-core processors, so in general I want
> to configure my nodes to run a maximum of 3 task processes per
> node at a time.
>
> But, for some jobs, my tasks load large amounts of memory, and I
> cannot fit 3 such tasks on a single node.  For these jobs, I'd like to
> enforce running a maximum of 1 task process per node at a time.
>
> I've tried to enforce this by setting
> mapred.tasktracker.map.tasks.maximum at runtime, but I see it has no
> effect, because this is a configuration for the TaskTracker, which is
> of course already running before my job starts.
>
> Is there no way to configure a maximum # of map tasks per node on a
> per-job basis?
>
> Thanks!
>
