hadoop-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: max number of map/reduce per node
Date Mon, 11 Feb 2013 11:54:13 GMT
Hi,

My reply inline.

On Mon, Feb 11, 2013 at 5:15 PM, Oleg Ruchovets <oruchovets@gmail.com> wrote:
> Hi
>    I found that my job runs with such parameters:
> mapred.tasktracker.map.tasks.maximum    4
> mapred.tasktracker.reduce.tasks.maximum    2
>
>    I try to change these parameters from my java code
>
>     Properties properties = new Properties();
>     properties.put("mapred.tasktracker.map.tasks.maximum" , "8");
>     properties.put("mapred.tasktracker.reduce.tasks.maximum" , "4");

These properties are per-tasktracker configuration; each tasktracker reads
them from its own config files at startup, so they are neither applicable
to, nor read from, job clients.

Also, if you're tweaking client-end properties, using the Java
Properties class is not the right way to go about it. See
Configuration API:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/conf/Configuration.html
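To illustrate the distinction, here is a minimal sketch of the client-side pattern using Hadoop's Configuration API instead of java.util.Properties. It assumes the Hadoop client libraries are on the classpath; the property values are illustrative, and "mapred.reduce.tasks" is shown only as an example of a setting that a client *can* legitimately influence per job:

```java
// Sketch: client-side job configuration via Hadoop's Configuration API,
// not java.util.Properties. Assumes Hadoop 1.x-era client libraries.
import org.apache.hadoop.conf.Configuration;

public class JobConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Per-job settings such as the number of reduce tasks may be set
        // from the client (illustrative value):
        conf.set("mapred.reduce.tasks", "4");

        // Setting the tasktracker slot maxima here has NO effect -- each
        // TaskTracker reads these from its own mapred-site.xml at startup:
        // conf.set("mapred.tasktracker.map.tasks.maximum", "8");   // ignored

        // The Configuration would then be passed to the Job/JobConf used
        // to submit the job.
    }
}
```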

> But executing the job I didn't get updated values of these parameters , it
> remains:
>
> mapred.tasktracker.map.tasks.maximum 4
> mapred.tasktracker.reduce.tasks.maximum 2
>
>
> Should I change the parameters on hadoop XML configuration files?

Yes, as these are per *tasktracker* properties, not client ones.
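A sketch of what the corresponding entries in mapred-site.xml on each tasktracker node would look like, using the values from your question (the tasktracker must be restarted for the change to take effect):

```xml
<!-- mapred-site.xml on each tasktracker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>8</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
```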

> Please advise.



--
Harsh J
