hadoop-mapreduce-user mailing list archives

From Markus Jelsma <markus.jel...@openindex.io>
Subject Variable mapreduce.tasktracker.*.tasks.maximum per job
Date Mon, 19 Dec 2011 23:02:39 GMT
Hi,

We have many different jobs running on a 0.22.0 cluster, each with its own
memory footprint. Some jobs can easily run with a high *.tasks.maximum per
node, while others need much more memory per task and can only run with a
small number of tasks per node.
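
Right now the slot limits are pinned cluster-wide in mapred-site.xml, roughly
like this (the values below are just examples):

    <property>
      <name>mapreduce.tasktracker.map.tasks.maximum</name>
      <value>8</value>
    </property>
    <property>
      <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
      <value>4</value>
    </property>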

Is there any way to reconfigure a running cluster on a per-job basis so we can
set the heap size and the number of map and reduce tasks per node? If not, we
have to pin all settings to a level that suits the toughest jobs, which would
have a negative impact on the simpler ones.
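
For the heap size, for example, what I have in mind is being able to pass
something like this per job (assuming the driver goes through ToolRunner; the
job, class and path names below are just placeholders):

    hadoop jar heavy-job.jar com.example.HeavyJob \
        -D mapred.child.java.opts=-Xmx2048m \
        /input /output

and ideally also tell the cluster to accept fewer (or more) concurrent tasks
per node for just that job.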

Thoughts?
Thanks
