hadoop-common-user mailing list archives

From Gokulakannan M <gok...@huawei.com>
Subject RE: Best practices - Large Hadoop Cluster
Date Tue, 10 Aug 2010 18:13:12 GMT

Hi Raj,

	As I understand it, the problem is being prompted for an SSH password
each time you start or stop the cluster. You want passwordless startup and
shutdown, right?

	Here is my way of overcoming the SSH problem.

	Write a shell script that does the following:
	1. Generate an SSH key pair on the namenode machine (the machine from
which you will start/stop the cluster).

	2. Read each entry from the conf/slaves file and do the following:
		2.1 Append the public key generated in step 1 to the
authorized_keys file on that datanode, with something like:
			cat $HOME/.ssh/public_key_file | ssh username@host '
cat >> $HOME/.ssh/authorized_keys'

	3. Repeat step 2 for each entry in conf/masters.

	Note: you will be prompted for the password of each username@host the
first time, since the ssh command in step 2.1 requires it.
	After that, you can start/stop your Hadoop cluster without an SSH
password.
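
	The steps above could be sketched as a single script along these
lines (hypothetical paths and variable names; assumes HADOOP_CONF points at
your conf/ directory and that you can type the password once per host):

```shell
#!/bin/sh
# Sketch of the key-distribution steps; adapt paths/usernames to your cluster.

HADOOP_CONF=${HADOOP_CONF:-/opt/hadoop/conf}   # assumed conf location
KEY=$HOME/.ssh/id_rsa

# Step 1: generate a key pair on the namenode if one doesn't exist yet.
[ -f "$KEY" ] || ssh-keygen -t rsa -N "" -f "$KEY"

# Steps 2 and 3: collect every host from conf/slaves and conf/masters,
# skipping comments, blank lines, and duplicates.
hosts=$(cat "$HADOOP_CONF/slaves" "$HADOOP_CONF/masters" \
        | grep -v '^#' | grep -v '^$' | sort -u)

# Step 2.1: append the public key to each host's authorized_keys.
# You will be asked for the password once per host on this run only.
for host in $hosts; do
    cat "$KEY.pub" | ssh "$USER@$host" \
        'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
done
```

	On systems that ship it, ssh-copy-id does the same thing as the loop
body for a single host, so the loop could just call that instead.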


-----Original Message-----
From: Raj V [mailto:rajvish@yahoo.com] 
Sent: Tuesday, August 10, 2010 7:16 PM
To: common-user@hadoop.apache.org
Subject: Best practices - Large Hadoop Cluster

I need to start setting up a large Hadoop cluster of 512 nodes. My problem is
the SSH keys. Is there a simpler way of generating and exchanging keys among
the nodes? Any best practices? If there is none, I could volunteer to do it.

