hadoop-general mailing list archives

From Harsh J <qwertyman...@gmail.com>
Subject Re: Hadoop - how exactly is a slot defined
Date Mon, 22 Nov 2010 18:11:22 GMT
Hi,

Answers inline.

On Mon, Nov 22, 2010 at 11:08 PM, Grandl Robert <rgrandl@yahoo.com> wrote:
> Thanks all for your comments.
>
> However, I still have some doubts.
>
> Basically I can control the number of map/reduce slots with
> mapred.tasktracker.map.tasks.maximum
> mapred.tasktracker.reduce.tasks.maximum
>
> but is it possible to set a different number of map/reduce slots for different slaves?

Yes, this setting is 'tasktracker' specific, as the property name
goes. Each TaskTracker can have a different config to load from.
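As a rough sketch, with purely illustrative slot counts (not recommendations), the
relevant block in a slave's local conf/mapred-site.xml looks like this:

  <configuration>
    <!-- Maximum number of map tasks this TaskTracker runs at once.
         The value 4 is only an example; size it to the machine. -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>
    <!-- Maximum number of simultaneous reduce tasks on this TaskTracker. -->
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>
  </configuration>

If you don't set these, the shipped default is 2 slots of each kind per TaskTracker.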

>
> For example, if I am running in a heterogeneous environment where each slave has a different
> configuration, is it possible to set a different number of slots based on each specific
> machine's configuration?

Yes, give each machine its own value via its local copy of conf/mapred-site.xml.
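For example (the machine sizes and slot counts below are made up for illustration), a
larger slave and a smaller one would simply carry different values in their own copies
of the file:

  <!-- conf/mapred-site.xml on a big slave, say 8 cores -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>
  </property>

  <!-- conf/mapred-site.xml on a small slave, say 2 cores -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>

Each TaskTracker reads its own local file at startup, so restart the TaskTracker on a
node after changing its values.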

> For the moment I have observed that I can modify these parameters only on the master, so
> all the nodes will run with the same number of map/reduce slots regardless of whatever
> resources (CPU, memory) each one offers.

Not really: each slave machine's config file (conf/mapred-site.xml) needs
to carry the settings you want its TaskTracker to use (DataNodes have
their own machine-specific configuration as well).

-- 
Harsh J
www.harshj.com
