hadoop-common-user mailing list archives

From Neo Anderson <javadeveloper...@yahoo.co.uk>
Subject Fair scheduler fairness question
Date Wed, 10 Mar 2010 15:38:12 GMT
I am learning how the fair scheduler manages jobs so that each job shares resources over time,
but I don't know whether my understanding is correct.

My scenario is that I have 3 data nodes and the cluster is configured with the fair scheduler,
with three pools defined (e.g. A, B, C). Each pool is configured with '<maxRunningJobs>1</maxRunningJobs>.'
Now the clients submit 4 jobs (e.g. via submitJob()) to the 3 different pools. For instance:

the first job is submitted to pool A
the second job is submitted to pool B
the third job is submitted to pool B
the fourth job is submitted to pool C
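
For reference, the pool setup described above would be expressed in the fair scheduler's allocation file (by default conf/fair-scheduler.xml). This is a sketch based on the scenario; the pool names A, B, C come from the example above:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Each pool limits itself to one concurrently running job -->
  <pool name="A">
    <maxRunningJobs>1</maxRunningJobs>
  </pool>
  <pool name="B">
    <maxRunningJobs>1</maxRunningJobs>
  </pool>
  <pool name="C">
    <maxRunningJobs>1</maxRunningJobs>
  </pool>
</allocations>
```

With this configuration, the second job submitted to pool B would wait for the first job in that pool to finish rather than run concurrently with it.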

So I expect that the first 3 jobs will occupy the free slots (the slots should all be full now).
Then the fourth job is submitted. But since the slots are full, and the fourth job should
also have a slot to execute in, the third job will be terminated (or killed)
so that the fourth job can be launched.

Is my scenario correct? 
And if I am right, is there any keyword I can search for in the logs to observe such activity
(e.g. the job that is being killed, i.e. the third job)?

Thanks for your help.
I appreciate any advice.