hadoop-common-user mailing list archives

From "Yoram Arnon" <yar...@yahoo-inc.com>
Subject RE: Task type priorities during scheduling ?
Date Tue, 25 Jul 2006 21:46:58 GMT
There is, actually, support for multiple jobs. Maps are scheduled separately
from reduces, and when the current job cannot saturate the cluster, the next
job's tasks get scheduled, and the next after that. I've seen several small
jobs execute concurrently on my largish clusters.
Reduces for a given job won't get scheduled before that job's maps are
scheduled, but that makes perfect sense - they'd have no work to do. Once
map tasks start getting scheduled, though, any available reduce slots will
get assigned that job's reduce tasks.
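[A minimal sketch of the assignment policy described above, as a toy model. The real logic lives inside Hadoop's JobTracker; every class, field, and method name below is invented for illustration. Free map slots are filled from the earliest-submitted job first, and a job's reduce tasks become eligible only once at least one of its maps has been scheduled.]

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the scheduling policy described above: jobs are considered
// in submission order, so a later job's tasks run only when earlier jobs
// cannot fill all the free slots. All names here are invented.
class Job {
    int pendingMaps, pendingReduces;
    boolean mapsStarted;  // reduces are eligible only after this is set
    Job(int maps, int reduces) { pendingMaps = maps; pendingReduces = reduces; }
}

public class ToyScheduler {
    /** Fill free map slots from the earliest-submitted job first. */
    static int assignMaps(List<Job> jobs, int freeMapSlots) {
        int assigned = 0;
        for (Job j : jobs) {                      // submission order
            while (freeMapSlots > 0 && j.pendingMaps > 0) {
                j.pendingMaps--;
                j.mapsStarted = true;
                freeMapSlots--;
                assigned++;
            }
        }
        return assigned;
    }

    /** Reduce slots go only to jobs whose maps have started. */
    static int assignReduces(List<Job> jobs, int freeReduceSlots) {
        int assigned = 0;
        for (Job j : jobs) {
            if (!j.mapsStarted) continue;         // nothing to shuffle yet
            while (freeReduceSlots > 0 && j.pendingReduces > 0) {
                j.pendingReduces--;
                freeReduceSlots--;
                assigned++;
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        List<Job> jobs = new ArrayList<>();
        jobs.add(new Job(3, 2));   // small job, submitted first
        jobs.add(new Job(10, 4));  // bigger job, submitted second
        // 5 free map slots: job 1 takes 3, job 2 takes the remaining 2.
        System.out.println(assignMaps(jobs, 5));        // 5
        // Both jobs now have maps started, so both can take reduce slots.
        System.out.println(assignReduces(jobs, 3));     // 3
        System.out.println(jobs.get(0).pendingReduces); // 0
    }
}
```

[This is why a small job submitted behind a big one still makes progress: once the big job's pending maps are exhausted, the loop falls through to the next job in line.]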


-----Original Message-----
From: Paul Sutter [mailto:sutter@gmail.com] 
Sent: Tuesday, July 25, 2006 11:01 AM
To: hadoop-user@lucene.apache.org
Subject: Re: Task type priorities during scheduling ?

First, it matters in the case of concurrent jobs. If you submit a 20
minute job while a 20 hour job is running, it would be nice if the
reducers for the 20 minute job could get a chance to run before the 20
hour job's mappers have all finished. So even without a throughput
improvement, you have an important capability (although it may require
another minor tweak or two to make possible).

Second, we often have stragglers, where one mapper runs slower
than the others. When this happens, we end up with a largely idle
cluster for as long as an hour. In cases like these, good support for
concurrent jobs _would_ improve throughput.
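[Hadoop's usual mitigation for stragglers is speculative execution: when one task's progress lags far behind its peers, the framework launches a backup copy and keeps whichever finishes first. A toy version of that kind of lag test follows; the 0.2 margin and all names are invented for illustration, not Hadoop's actual heuristic.]

```java
// Toy straggler check in the spirit of speculative execution: flag a task
// when its progress trails the average of its peers by more than a fixed
// margin. The 0.2 threshold is an invented illustration value.
public class StragglerCheck {
    static boolean shouldSpeculate(double[] progress, int task) {
        double sum = 0;
        for (double p : progress) sum += p;
        double avg = sum / progress.length;
        return avg - progress[task] > 0.2;   // far behind the pack
    }

    public static void main(String[] args) {
        double[] progress = {0.95, 0.9, 1.0, 0.3};  // fractions complete
        System.out.println(shouldSpeculate(progress, 3)); // true
        System.out.println(shouldSpeculate(progress, 0)); // false
    }
}
```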


On 7/25/06, Doug Cutting <cutting@apache.org> wrote:
> Paul Sutter wrote:
> > it should be possible to have lots of tasks in the shuffle phase
> > (mostly, sitting around waiting for mappers to run), but only have
> > "about" one actual reduce phase running per cpu (or whatever works for
> > each of our apps) that gets enough memory for a sorter, does
> > substantial computation, etc.
> Ah, now I see your point, although I don't see how this would improve
> overall throughput.  In most cases, the optimal configuration is for the
> total number of reduce tasks to be roughly the total number of reduces
> that can run at once.  So there is no queue of waiting reduce tasks to
> schedule.
> Doug
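[Doug's rule of thumb - make the total number of reduce tasks roughly equal to the number of reduces the cluster can run at once - is simple arithmetic. A hypothetical sizing helper, with invented names; the result would typically be handed to something like JobConf.setNumReduceTasks in the old API.]

```java
// Toy sizing helper for the rule of thumb above: one "wave" of reduces,
// sized to the cluster's simultaneous reduce capacity, leaves no reduce
// task queued behind another. All names are invented for illustration.
public class ReduceSizing {
    static int suggestedReduces(int nodes, int reduceSlotsPerNode) {
        return nodes * reduceSlotsPerNode;   // one wave of reduces
    }

    public static void main(String[] args) {
        // e.g. 40 nodes with 2 reduce slots each -> 80 reduce tasks
        System.out.println(suggestedReduces(40, 2)); // 80
    }
}
```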
