hadoop-mapreduce-dev mailing list archives

From Venu Gopala Rao <venugopalarao.ko...@huawei.com>
Subject Quincy Fair Scheduler Vs Hadoop Fair Scheduler
Date Fri, 17 Jun 2011 13:46:12 GMT
Hi All,

 

   I have come across a fair scheduler published by Microsoft known as the
Quincy Fair Scheduler. In their paper they compare the Hadoop Fair Scheduler
with Quincy and say the Hadoop scheduler has the following problems:

 

1)  Sticky Slots - One drawback of simple fair scheduling is that it is
damaging to locality. Consider the steady state in which each job occupies
exactly its allocated quota of computers. Whenever a task from job j
completes on computer m, j becomes unblocked while all of the other jobs in
the system remain blocked. Consequently m will be assigned to one of j's
tasks again: this is referred to as the "sticky slot" problem (a small
simulation sketch illustrating this is included after this list).

 

2) The Fair Scheduler may not be able to exploit data locality to the
maximum possible extent.
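
To make the sticky-slot point concrete, here is a minimal, self-contained
Java sketch. It is not Hadoop or Quincy code; the machine count, job count,
and data-placement layout are made-up assumptions purely for illustration.
It simulates greedy fair scheduling (give a freed slot to the job furthest
below its fair share) and shows that each slot keeps going back to the same
job, so the data-local assignment rate never improves beyond the initial
placement.

import java.util.*;

/**
 * Illustrative sketch (not Hadoop code) of the "sticky slot" effect under
 * greedy fair scheduling. Cluster size, job shares, and the locality model
 * are assumptions made up for this example.
 */
public class StickySlotDemo {

    public static void main(String[] args) {
        int machines = 8;          // one task slot per machine (assumption)
        int jobs = 4;              // each job's fair share is 2 slots
        Random rnd = new Random(42);

        // Assume each job has its input data on a distinct pair of machines.
        int[] preferredA = {0, 2, 4, 6};
        int[] preferredB = {1, 3, 5, 7};

        // Start in the steady state: every job holds exactly its fair share,
        // but the initial round-robin placement ignores locality.
        int[] slotOwner = new int[machines];
        for (int m = 0; m < machines; m++) {
            slotOwner[m] = m % jobs;
        }
        int[] running = new int[jobs];
        Arrays.fill(running, machines / jobs);

        int localAssignments = 0, totalAssignments = 0;

        for (int step = 0; step < 10_000; step++) {
            // A task of job j finishes on machine m; j drops below its share.
            int m = rnd.nextInt(machines);
            int j = slotOwner[m];
            running[j]--;

            // Greedy fair scheduling: hand the freed slot to the job furthest
            // below its fair share. In this steady state that is always j
            // itself, so the slot "sticks" to j regardless of locality.
            int next = 0;
            for (int k = 1; k < jobs; k++) {
                if (running[k] < running[next]) next = k;
            }
            slotOwner[m] = next;
            running[next]++;

            totalAssignments++;
            if (m == preferredA[next] || m == preferredB[next]) {
                localAssignments++;
            }
        }

        System.out.printf("data-local assignments: %.1f%%%n",
                100.0 * localAssignments / totalAssignments);
    }
}

Running this prints a data-local assignment rate that stays stuck at
whatever the initial round-robin placement happened to give (25% here),
because no slot ever changes hands between jobs.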

 

Do these problems get solved in 0.21 or in the Next Gen MapReduce?

 

Regards

Venu
