hadoop-user mailing list archives

From João Paulo Forny <jpfo...@gmail.com>
Subject Re: setting maximum mapper concurrently running
Date Tue, 27 May 2014 03:34:58 GMT
The number of map and reduce slots on each TaskTracker node is controlled
by the *mapreduce.tasktracker.map.tasks.maximum* and
*mapreduce.tasktracker.reduce.tasks.maximum* Hadoop properties in the
mapred-site.xml file. If you change these settings, restart all of the
TaskTracker nodes.
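
For reference, the corresponding entries in mapred-site.xml would look something like this (the slot counts here are only illustrative; choose values that fit each node's cores and memory):

```xml
<!-- mapred-site.xml on each TaskTracker node; restart the TaskTracker after editing -->
<property>
  <name>mapreduce.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```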

I guess you can't change these settings for a specific job through a -D
parameter, since changing them requires restarting the TaskTracker.

2014-05-26 21:52 GMT-03:00 Du Lam <delim123456@gmail.com>:

> Is there any setting that can be set at job run time for the maximum
> number of mappers running concurrently?
> I know there is a jobtracker-level parameter that can be set, but that
> is a global parameter for every job. Is it possible to set it per job?
