hadoop-common-user mailing list archives

From Amar Kamat <ama...@yahoo-inc.com>
Subject Re: Sharing Hadoop slave nodes between multiple masters?
Date Wed, 26 Mar 2008 04:33:29 GMT
On Tue, 25 Mar 2008, Nate Carlson wrote:

> Is it possible to have a single slave process jobs for multiple masters?
There are two types of slaves and two corresponding masters in Hadoop: the masters are the NameNode and the JobTracker, and the slaves are the DataNodes and TaskTrackers, respectively. Each slave is started with its master's address hardcoded in the configuration passed to it at start-up, so sharing a single slave between multiple masters is not possible.
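For reference, the master addresses come from the site configuration (hadoop-site.xml in this era); a minimal sketch is below. The hostnames and ports are only placeholders, not real addresses.

  <!-- hadoop-site.xml (sketch): each slave reads its masters from here -->
  <!-- hostnames/ports are illustrative placeholders -->
  <property>
    <name>fs.default.name</name>      <!-- NameNode the DataNode reports to -->
    <value>hdfs://namenode-host:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>   <!-- JobTracker the TaskTracker reports to -->
    <value>jobtracker-host:9001</value>
  </property>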
>
> If not, I guess I'll just run multiple slaves on the same machines.  ;)
Yes, it seems so. But be sure to control the number of tasks that can run on each machine. A commonly used configuration is 4 maps and 4 reduces for a (mapred) slave that is not shared, i.e. per machine. Try to make sure that the total number of tasks that can run simultaneously stays reasonable.
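As a sketch (exact property names may differ by version; older releases use a single mapred.tasktracker.tasks.maximum instead, and the 4/4 values here are just the common example), the per-TaskTracker slot limits would look something like:

  <!-- hadoop-site.xml (sketch): cap concurrent tasks per TaskTracker -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>    <!-- at most 4 map tasks at once on this TaskTracker -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>    <!-- at most 4 reduce tasks at once on this TaskTracker -->
  </property>

If you run multiple slave instances on one machine, divide these limits among them so the machine-wide total stays reasonable.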
Amar
>
> (Trying to share slaves for our dev/staging/qa environments)
>
> Thanks!
>
> -Nate
>
