hadoop-hdfs-user mailing list archives

From "David Parks" <davidpark...@yahoo.com>
Subject Using FairScheduler to limit # of tasks
Date Mon, 13 May 2013 11:21:53 GMT
Can I use the FairScheduler to limit the number of map/reduce tasks directly
from the job configuration? For example, I have one job that I know should run
with fewer concurrent map/reduce tasks than the default. I'd like to configure
a queue with a limited number of map/reduce task slots and apply it only to
that job, without deploying that queue configuration to the whole cluster.
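For context, the kind of per-pool cap I have in mind looks like the sketch below, based on the (MR1) Fair Scheduler's cluster-side allocation file; the pool name "limited" is just a placeholder I made up:

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml: the Fair Scheduler's allocation file,
     normally deployed cluster-side (which is what I'd like to avoid).
     Pool name "limited" is hypothetical. -->
<allocations>
  <pool name="limited">
    <!-- Cap concurrent map and reduce tasks for jobs in this pool -->
    <maxMaps>10</maxMaps>
    <maxReduces>10</maxReduces>
  </pool>
</allocations>
```

A job would then opt into the pool from its own configuration, e.g. with `-Dmapred.fairscheduler.pool=limited`; what I'm unsure of is whether the cap itself can be set job-side rather than in this file.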


Assuming the answer to the above is "yes": if I were to limit the number of map
tasks to 10 on a cluster of 10 nodes, would the Fair Scheduler tend to
distribute those 10 map tasks evenly across the nodes (assuming the cluster is
otherwise idle at the moment), or would it be prone to overloading a single
node simply because those are the first open slots it sees?


David
