hadoop-common-user mailing list archives

From Prasan Ary <voicesnthed...@yahoo.com>
Subject Re: Hadoop on EC2 for large cluster
Date Thu, 20 Mar 2008 18:56:41 GMT
Chris,
What do you mean when you say boot the slaves with "the master private name"?

=======================

Chris K Wensel <chris@wensel.net> wrote:
I found it much better to start the master first, then boot the slaves 
with the master's private name.
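
For example, a minimal sketch of what that can look like (the hostname 
and ports below are made up, and the config keys are the 0.16-era names):

# On each slave, at boot, point Hadoop at the master's internal
# (private) EC2 DNS name instead of its public one.
MASTER=ip-10-251-27-12.ec2.internal   # hypothetical private name
cat > $HADOOP_HOME/conf/hadoop-site.xml <<EOF
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://$MASTER:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>$MASTER:9001</value>
  </property>
</configuration>
EOF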

I do not use the start|stop-all scripts, so I do not need to maintain 
the slaves file. Thus I don't need to push private keys around to 
support those scripts.
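
Something like this instead (a sketch, assuming a stock Hadoop layout):

# Start each daemon locally on the node it belongs to, rather than
# running bin/start-all.sh from the master. No slaves file, no SSH
# fan-out from the master, so no private keys to distribute.

# on the master:
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start jobtracker

# on each slave, from its own boot script:
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker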

This lets me start 20 nodes, then add 20 more later, or kill some.
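
E.g., with the old EC2 command-line tools (AMI id and keypair name are 
made up):

# Launch 20 more slaves, passing the master's private name as
# user-data so each slave's boot script can read it.
ec2-run-instances ami-12345678 -n 20 -k gsg-keypair \
  -d "ip-10-251-27-12.ec2.internal"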

Btw, get Ganglia installed. Life will be better knowing what's going on.

Also, setting up FoxyProxy on Firefox lets you browse your whole 
cluster if you set up an SSH tunnel (SOCKS).
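
For instance (the public hostname and local port are made up):

# Open a dynamic (SOCKS) tunnel to the master, then point FoxyProxy
# at localhost:6666 so the web UIs on the private names resolve.
ssh -D 6666 root@ec2-67-202-34-12.compute-1.amazonaws.com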

On Mar 20, 2008, at 10:15 AM, Prasan Ary wrote:
> Hi All,
> I have been trying to configure Hadoop on EC2 for a large cluster 
> (100-plus nodes). It seems that I have to copy the EC2 private key 
> to all the machines in the cluster so that they can make SSH 
> connections.
> For now it seems I have to run a script to copy the key file to 
> each of the EC2 instances. I wanted to know if there is a better way 
> to accomplish this.
>
> Thanks,
> PA

Chris K Wensel
chris@wensel.net
http://chris.wensel.net/





       