hadoop-common-dev mailing list archives

From Brian Bockelman <bbock...@cse.unl.edu>
Subject Re: Regarding Job tracker
Date Wed, 28 Apr 2010 13:39:37 GMT

On Apr 28, 2010, at 5:04 AM, Steve Loughran wrote:

> prajyot bankade wrote:
>> Hello Everyone,
>> I have just started reading about the Hadoop JobTracker. In one book I read
>> that there is only one JobTracker, which is responsible for distributing tasks to
>> the worker machines. Please correct me if I say something wrong.
>> I have a few questions:
>> Why is there only one JobTracker?
> to provide a single place to make scheduling decisions
(thread hijack)

Why is this an advantage? (I mean, I know it's an advantage in terms of the current architecture...
just indulging in some blue-sky thinking here).

One of the projects I work with is the Condor Project out of Madison, who have
been building distributed computing infrastructure for about 20 years.  Here is
one of my favorite "overview" papers of theirs:

http://www.cs.wisc.edu/condor/doc/condor-practice.pdf  (my favorite is sections 4, 5, and …)

They have gotten a lot of mileage out of breaking scheduling and resource
provisioning into two separate components.  Having multiple JobTrackers would be
very advantageous if it didn't require you to partition your pool.
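To make the split concrete, here is a minimal sketch (hypothetical names, not Condor's or Hadoop's actual API) of the idea: machines advertise resource offers, and a separate matchmaker pairs job requirements with offers. Scheduling policy lives entirely in `matchmake`, independent of how the resources were provisioned.

```python
def matchmake(jobs, offers):
    """Pair each job with the first unclaimed offer that satisfies its
    requirements; return a list of (job name, host) assignments."""
    assignments = []
    claimed = set()  # indices of offers already handed out
    for job in jobs:
        for i, offer in enumerate(offers):
            if i in claimed:
                continue
            # A job matches an offer when the offer meets all requirements.
            if (offer["cpus"] >= job["cpus"] and
                    offer["memory_mb"] >= job["memory_mb"]):
                assignments.append((job["name"], offer["host"]))
                claimed.add(i)
                break
    return assignments

# Resource side: machines advertise what they have to give.
offers = [
    {"host": "node2", "cpus": 2, "memory_mb": 4096},
    {"host": "node1", "cpus": 4, "memory_mb": 8192},
]
# Scheduling side: jobs advertise what they need.
jobs = [
    {"name": "research-job", "cpus": 2, "memory_mb": 2048},
    {"name": "production-job", "cpus": 4, "memory_mb": 8192},
]
print(matchmake(jobs, offers))
```

Because matching is the only point of contact between the two sides, you could in principle run several independent schedulers against the same set of offers.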

One use case would be separating "production work" from "research activities".  You
could have a 'production jobtracker' which is accessible to a small number of users and runs
"known good", pre-approved, business-critical workflows, and a 'research jobtracker' which
more folks are allowed to use without pre-approved workflows.  That way, if a researcher accidentally
crashes the research jobtracker in the middle of the night, your business-critical work continues.
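With the current architecture, that split means partitioning the cluster: each group of clients points at its own jobtracker via the standard `mapred.job.tracker` property. A sketch of the client-side config (the hostnames and ports here are made up):

```xml
<!-- mapred-site.xml for clients of the *production* pool -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>prod-jt.example.com:8021</value>
  </property>
</configuration>

<!-- mapred-site.xml for clients of the *research* pool -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>research-jt.example.com:8021</value>
  </property>
</configuration>
```

The cost is that each jobtracker also needs its own disjoint set of tasktrackers, which is exactly the pool-partitioning you'd like to avoid.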

I think there's plenty of merit to the idea, and it's worth thinking about.

