hadoop-common-user mailing list archives

From Ted Dunning <tdunn...@veoh.com>
Subject Re: Questions about the MapReduce libraries and job schedulers inside JobTracker and JobClient running on Hadoop
Date Fri, 15 Feb 2008 21:54:18 GMT

Core-user is the right place for this question.

Your description is mostly correct.  Jobs don't necessarily go to all of
your boxes in the cluster, but they may.

Non-uniform machine specs are a bit of a problem that is being (has been?)
addressed by allowing each machine to have a slightly different
hadoop-site.xml file.  That would allow different settings for storage
configuration and number of processes to run.
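To make that concrete, here is a rough sketch of the kind of per-machine
override you might drop into hadoop-site.xml on one of the bigger boxes (the
property names are the standard ones for task slots and datanode storage; the
values and paths below are purely illustrative):

  <configuration>
    <!-- let this stronger node run more concurrent tasks -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>16</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>4</value>
    </property>
    <!-- this node also has an extra disk available for DFS blocks -->
    <property>
      <name>dfs.data.dir</name>
      <value>/data1/dfs,/data2/dfs</value>
    </property>
  </configuration>

Each tasktracker reads its own copy of this file when it starts, so a slave
with a bigger value here simply advertises more task slots to the JobTracker.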

Even without that, you can level the load a bit by simply running more tasks
on the weak machines than you would otherwise prefer.  Most MapReduce
programs are pretty light on memory usage, so all that happens is that you
get less throughput on the weak machines.  Since there are normally more map
tasks than cores, this is no big deal; slow machines get fewer tasks, and
toward the end of the job their tasks are even replicated on other machines
in case they can be done more quickly.
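(That re-running of straggler tasks near the end of a job is speculative
execution; if memory serves it is on by default and can be toggled in
hadoop-site.xml with something like the following:)

  <property>
    <!-- re-launch slow tasks on idle nodes; whichever copy finishes first wins -->
    <name>mapred.speculative.execution</name>
    <value>true</value>
  </property>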


On 2/15/08 1:25 PM, "Andrew_Lee@trendmicro.com" <Andrew_Lee@trendmicro.com>
wrote:

> 
> Hello,
> 
> This is my first time posting to this group.  My question is more of a
> MapReduce question
> than a question about Hadoop HDFS itself.
> 
> To my understanding, the JobClient will submit all Mapper and Reducer classes
> in a uniform way to the cluster.  Can I assume this is more like a uniform
> scheduler
> for all the tasks?
> 
> For example, say I have a 100-node cluster: 1 master (namenode) and 99 slaves
> (datanodes).
> When I do 
> "JobClient.runJob(jconf)"
> the JobClient will uniformly distribute the Mapper and Reducer classes to all 99
> nodes.
> 
> The slaves will all have the same hadoop-site.xml and
> hadoop-default.xml.
> Here comes the main concern: what if some of the nodes don't have the same
> hardware spec, such as
> memory or CPU speed?  E.g. different batch purchases and repairs over time
> can cause this.
> 
> Is there any way that the JobClient can be aware of this and submit a different
> number of tasks to different slaves
> during start-up?
> For example, some slaves have a 16-core CPU instead of 8 cores.  The
> problem I see here is that
> on the 16-core machines, only 8 cores are used.
> 
> P.S. I'm looking into the JobClient source code and JobProfile/JobTracker to
> see if this can be done,
> but I'm not sure I am on the right track.
> 
> If this topic is better suited to core-dev@hadoop.apache.org, please
> let me know and I'll send another one to that list.
> 
> Regards,
> -Andy
> 

