hadoop-common-user mailing list archives

From "Andy Li" <annndy....@gmail.com>
Subject Re: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop
Date Sat, 16 Feb 2008 05:17:59 GMT
Thanks for both inputs.  My question actually focuses more on what Vivek
mentioned.

I would like to work on the JobClient to see how it submits jobs to different
file systems and slaves in the same Hadoop cluster.

I'm not sure whether there is a complete document explaining the scheduler
underneath Hadoop.  If not, I'll write up what I know and learn from the
source code and submit it to the community once it is done.  Reviews and
comments are welcome.

For the code, I couldn't find JobInProgress in the API index.  Could anyone
give me a pointer to it?  Thanks.
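
To make sure I'm tracing the right entry point, below is roughly the
submission path I have in mind.  It is only a sketch against the
org.apache.hadoop.mapred API (exact class and method names vary a bit
between releases), using the stock identity mapper and reducer so the job
is complete:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.IdentityReducer;

    public class ExampleDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(ExampleDriver.class);
        conf.setJobName("example");
        // Identity mapper/reducer just to have a complete, submittable job.
        conf.setMapperClass(IdentityMapper.class);
        conf.setReducerClass(IdentityReducer.class);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // runJob() packages the job (input splits, job jar, JobConf), hands
        // it to the JobTracker, and polls until completion.  Which TaskTracker
        // runs which task is decided later by the JobTracker, not here.
        RunningJob result = JobClient.runJob(conf);
        System.out.println("Job successful: " + result.isSuccessful());
      }
    }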

On Fri, Feb 15, 2008 at 3:01 PM, Vivek Ratan <vivekr@yahoo-inc.com> wrote:

> I read Andy's question a little differently. For a given job, the JobTracker
> decides which tasks go to which TaskTracker (the TTs ask for a task to run
> and the JT decides which task is the most appropriate). Currently, the JT
> favors a task whose input data is on the same host as the TT (if there is
> more than one such task, it picks the one with the largest input size). It
> also looks at failed tasks and certain other criteria. This is very basic
> scheduling and there is a lot of scope for improvement. There is currently a
> proposal to support rack awareness, so that if the JT can't find a task
> whose input data is on the same host as the TT, it looks for a task whose
> data is on the same rack.
>
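
Just to check my understanding of the above, here is a rough sketch of that
host-local-first selection.  It is illustrative Java only, with made-up
PendingMap and Topology types; it is not the actual JobInProgress logic,
which also weighs failed tasks and other criteria:

    import java.util.List;

    // Illustrative only.  PendingMap (a pending map task with its split
    // locations and input size) and Topology (host -> rack lookup) are
    // made-up types, not Hadoop classes.
    interface PendingMap { List<String> splitHosts(); long inputSize(); }
    interface Topology   { String rackOf(String host); }

    class LocalityFirstPick {
      PendingMap pickFor(String ttHost, List<PendingMap> pending, Topology topo) {
        PendingMap best = null;
        // 1. Prefer a task whose input data lives on the requesting TT's host,
        //    breaking ties by largest input size.
        for (PendingMap t : pending) {
          if (t.splitHosts().contains(ttHost)
              && (best == null || t.inputSize() > best.inputSize())) {
            best = t;
          }
        }
        if (best != null) return best;
        // 2. Under the rack-awareness proposal: fall back to a task whose
        //    data is at least on the same rack as the TT.
        for (PendingMap t : pending) {
          for (String host : t.splitHosts()) {
            if (topo.rackOf(host).equals(topo.rackOf(ttHost))) return t;
          }
        }
        // 3. Otherwise run any pending task and pay the network transfer.
        return pending.isEmpty() ? null : pending.get(0);
      }
    }
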
> You can clearly get more ambitious with your scheduling algorithm. As you
> mention, you could use other criteria for scheduling a task: available CPU
> or memory, for example. You could assign tasks to hosts that are the most
> 'free', or aim to distribute tasks across racks, or try some other load
> balancing techniques. I believe there are a few discussions on these methods
> on Jira, but I don't think there's anything concrete yet.
>
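
For the 'most free host' idea, I imagine the hook would be some
per-TaskTracker score along these lines.  The metrics and weights below are
invented purely for illustration; the TaskTrackers would first have to
report them:

    // Hypothetical load score for ranking TaskTrackers.  None of these
    // fields are reported by the current TaskTracker heartbeat; they are
    // invented to show the shape of a load-based policy.
    class TrackerLoad {
      int    freeTaskSlots;    // configured maximum minus running tasks
      double cpuUtilization;   // 0.0 - 1.0
      long   freeMemoryBytes;

      // Higher score = better candidate for the next task.
      double score() {
        return freeTaskSlots
             + 2.0 * (1.0 - cpuUtilization)
             + freeMemoryBytes / (double) (1L << 30);  // roughly "free GB"
      }
    }
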
> BTW, the code that decides what task to run is primarily in
> JobInProgress::findNewTask().
>
>
> -----Original Message-----
> From: Ted Dunning [mailto:tdunning@veoh.com]
> Sent: Friday, February 15, 2008 1:54 PM
> To: core-user@hadoop.apache.org
> Subject: Re: Questions about the MapReduce libraries and job schedulers
> inside JobTracker and JobClient running on Hadoop
>
>
> Core-user is the right place for this question.
>
> Your description is mostly correct.  Jobs don't necessarily go to all of
> your boxes in the cluster, but they may.
>
> Non-uniform machine specs are a bit of a problem that is being (has been?)
> addressed by allowing each machine to have a slightly different
> hadoop-site.xml file.  That would allow different settings for storage
> configuration and number of processes to run.
>
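
So, if I follow, each slave could carry its own overrides in its local
hadoop-site.xml, something like the example below for a beefier 16-core box.
The property names here are the ones I believe current releases use, but
they differ between Hadoop versions, so the hadoop-default.xml shipped with
your release is the authoritative list:

    <!-- hadoop-site.xml on a 16-core slave; values are only examples -->
    <configuration>
      <property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>14</value>
      </property>
      <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>4</value>
      </property>
      <property>
        <!-- per-machine storage layout can differ as well -->
        <name>dfs.data.dir</name>
        <value>/disk1/dfs/data,/disk2/dfs/data</value>
      </property>
    </configuration>
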
> Even without that, you can level the load a bit by simply running more jobs
> on the weak machines than you would otherwise prefer.  Most map reduce
> programs are pretty light on memory usage so all that happens is that you
> get less throughput on the weak machines.  Since there are normally more map
> tasks than cores, this is no big deal; slow machines get fewer tasks and
> toward the end of the job, their tasks are even replicated on other machines
> in case they can be done more quickly.
>
>
> On 2/15/08 1:25 PM, "Andrew_Lee@trendmicro.com" <Andrew_Lee@trendmicro.com>
> wrote:
>
> >
> > Hello,
> >
> > My first time posting in this news group.  My question sounds more like
> > a MapReduce question than a Hadoop HDFS question.
> >
> > To my understanding, the JobClient will submit all Mapper and Reducer
> > classes in a uniform way to the cluster?  Can I assume this acts more like
> > a uniform scheduler for all the tasks?
> >
> > For example, say I have a 100-node cluster: 1 master (namenode) and 99
> > slaves (datanodes).  When I do
> > "JobClient.runJob(jconf)"
> > the JobClient will uniformly distribute all Mapper and Reducer classes
> > to all 99 nodes.
> >
> > The slaves will all have the same hadoop-site.xml and
> > hadoop-default.xml.
> > Here comes the main concern: what if some of the nodes don't have the
> > same hardware spec, such as memory or CPU speed?  E.g., different batch
> > purchases and repairs over time can cause this.
> >
> > Is there any way the JobClient can be aware of this and submit a
> > different number of tasks to different slaves during start-up?
> > For example, some slaves have 16-core CPUs instead of 8 cores.
> > The problem I see here is that on the 16-core machines, only 8 cores
> > are used.
> >
> > P.S. I'm looking into the JobClient source code and
> > JobProfile/JobTracker to see if this can be done,
> > but I'm not sure I am on the right track.
> >
> > If this topic belongs more on core-dev@hadoop.apache.org, please let
> > me know and I'll send another one to that list.
> >
> > Regards,
> > -Andy
> >
