hadoop-common-dev mailing list archives

From "Segel, Mike" <mse...@navteq.com>
Subject RE: Map Reduce in heterogeneous environ..
Date Thu, 11 Mar 2010 12:53:35 GMT
I agree that this may not be a problem with Hadoop, but more of an issue of how to manage the configuration.
So what are you suggesting?

If I understand your comments, would the following be a good idea?

In a common directory, we have a hadoop.conf directory which contains all of the configuration
information for a node.
We then push that common folder out to each machine as /etc/hadoop-0.20/conf.devcloud and then
create a symbolic link so that /etc/hadoop-0.20/conf points to this folder. (devcloud because
it's a development cloud.)
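The staging-plus-symlink scheme above can be sketched locally; this is a minimal illustration, with a mktemp directory standing in for /etc/hadoop-0.20, and the staged file name and contents invented for the demo:

```shell
# Local sketch of the stage-then-symlink scheme; $root stands in for
# /etc/hadoop-0.20, and the 'overrides' file is purely illustrative.
root=$(mktemp -d)
mkdir -p "$root/conf.devcloud"
# Stage a config file in the cluster-specific directory
echo "mapred.tasktracker.map.tasks.maximum=8" > "$root/conf.devcloud/overrides"
# ln -sfn repoints the generic 'conf' link at the staged directory
ln -sfn "$root/conf.devcloud" "$root/conf"
cat "$root/conf/overrides"
```

Because daemons only ever read through the `conf` symlink, switching a node to a different configuration is just a matter of repointing the link.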

From a central point, we can manage multiple clouds and their configurations. devCloud for
development, prodCloud for production, etc ... 

If we were to have a 'non-homogeneous' cloud, would you then have us create a hadoop.conf.machineTypeA
for one type of node and a hadoop.conf.machineTypeB for another type of node?

Then push them out respective of their machine types?
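A per-machine-type push could be sketched like this; the hostnames and the host-to-type mapping are assumptions made up for illustration, and the actual rsync is left as a comment:

```shell
#!/usr/bin/env bash
# Hypothetical push plan: pick the config directory by machine type.
# Hostnames and the host-to-type mapping are invented for this sketch.
declare -A node_type=( [node-a1]=machineTypeA [node-b1]=machineTypeB )
plan=""
for host in "${!node_type[@]}"; do
  src="hadoop.conf.${node_type[$host]}"
  # A real run would do something like:
  #   rsync -a "$src/" "$host:/etc/hadoop-0.20/conf.devcloud/"
  plan+="$src -> $host"$'\n'
done
printf '%s' "$plan"
```

The mapping table is the only per-type state; everything else is the same push logic as the homogeneous case.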

I mean this would be trivial to do, but just because we can do it, is it the smart thing to do?

Managing a cloud is like building a data warehouse: if you don't do it right from the beginning,
you end up spending a lot more money, effort, and time fixing your mistakes.



-----Original Message-----
From: Steve Loughran [mailto:stevel@apache.org] 
Sent: Thursday, March 11, 2010 6:25 AM
To: common-dev@hadoop.apache.org
Subject: Re: Map Reduce in heterogeneous environ..

abhishek sharma wrote:
>> No. of slots per task tracker cannot be varied so even if some nodes
>> have additional cores, extra slots cannot be added.
> True. This is what I have been wishing for;-) I routinely use clusters
> where some machines have 8 while others have 4 cores.

Varying the number of task slots per node is trivial: every TT reports the number of 
available slots. You need a separate config file for each 
class of node in your cluster; set the 
mapred.tasktracker.map.tasks.maximum and 
mapred.tasktracker.reduce.tasks.maximum values to the limits for those 
machines, then push the right config file out to the right target machines.
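For one class of machine, the corresponding mapred-site.xml fragment might look like this (the slot counts are illustrative, not recommendations):

```xml
<!-- mapred-site.xml fragment for an 8-core machine class; values are illustrative -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>8</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>
  </property>
</configuration>
```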

If you don't have a way of providing different configurations to 
different machines in your cluster, the problem lies with your 
configuration management tooling/policy, not Hadoop.

What we don't have (today) is the ability of a live TT to vary its slots 
based on other system information, so if the machine is also accepting 
workloads from some grid scheduler, the TT can't look at the number of 
live grid jobs or the IO load and use that to reduce its slot count. 
Contributions there would be welcomed by those people that share compute 
nodes across different workloads.


