hadoop-common-user mailing list archives

From "Puri, Aseem" <Aseem.P...@Honeywell.com>
Subject RE: How to access data node without a passphrase?
Date Wed, 22 Apr 2009 04:35:57 GMT
Arber,

A. You have to first setup authorization keys

1. Execute the following command to generate keys: "ssh-keygen"
2. When prompted for a filename and passphrase, press ENTER to accept the
defaults (an empty passphrase).
3. After the command has finished generating keys, change into your .ssh
directory: "cd ~/.ssh"
4. Register the new public key with the following command:
"cat id_rsa.pub >> authorized_keys"
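The four steps above can also be run non-interactively as one short script. A minimal sketch; the SSH_DIR variable is an assumption added here so the commands can be tried safely in a scratch directory (by default it is the standard ~/.ssh):

```shell
# Non-interactive version of the four steps above.
# SSH_DIR defaults to ~/.ssh; point it at a scratch directory to experiment.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
# -N '' sets an empty passphrase, so ssh-keygen asks no questions:
ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"
# Register the new public key for key-based logins to this account:
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```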
	
B. Generate public/private key pairs on all your machines

1. Issue the following commands ($> is the command prompt):
	a) ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
	b) cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
2. Because -P '' supplies an empty passphrase, these commands should run
without prompting you for anything.

C. Exchange public keys

On the master, issue the following command:
$ scp ~/.ssh/id_dsa.pub <slaveusername>@slave:~/.ssh/master-key.pub

Enter your password when prompted. This copies the public key file in use
on the master to the slave.
On the slave, issue the following command:
$ cat ~/.ssh/master-key.pub >> ~/.ssh/authorized_keys

This will append your public key to the set of authorized keys the slave
accepts for authentication purposes.
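If key-based logins still prompt for a password after this exchange, a common cause is file permissions: with StrictModes (the OpenSSH default) sshd ignores an authorized_keys file or .ssh directory that is group- or world-writable. A quick fix, assuming the default ~/.ssh layout (TARGET_DIR is just a convenience variable added here):

```shell
# Tighten permissions so sshd accepts the keys under StrictModes.
TARGET_DIR="${TARGET_DIR:-$HOME/.ssh}"
mkdir -p "$TARGET_DIR"
chmod 700 "$TARGET_DIR"                  # directory: owner-only access
touch "$TARGET_DIR/authorized_keys"
chmod 600 "$TARGET_DIR/authorized_keys"  # key list: owner read/write only
```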

Hope this helps

Aseem Puri

-----Original Message-----
From: Alex Loddengaard [mailto:alex@cloudera.com] 
Sent: Wednesday, April 22, 2009 9:55 AM
To: core-user@hadoop.apache.org
Subject: Re: How to access data node without a passphrase?

I would recommend installing the Hadoop RPMs and avoiding the start-all
scripts altogether.  The RPMs ship with init scripts, allowing you to
start and stop daemons with /sbin/service (or with a configuration
management tool, which I assume you'll be using as your cluster grows).
Here's more info on the RPMs:

<http://www.cloudera.com/hadoop>

The start-all scripts are an easy way to start / stop small clusters, but
they become more annoying as your cluster grows (you have to distribute
authorized_keys files, iteratively start each daemon, etc.).  If you want
to stick with the tarball, you can use bin/hadoop-daemon.sh on each node
as well, though the only thing this buys you is being able to avoid
shipping your public key around for the "hadoop" user:

bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker
bin/hadoop-daemon.sh start etc
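With the key setup described earlier in the thread, per-node daemon startup can itself be scripted by looping over a list of hosts. A minimal sketch, not part of Hadoop itself; the start_on_slaves function and the one-hostname-per-line slaves file are assumptions, and the third argument lets you dry-run with echo instead of ssh:

```shell
# Sketch: run hadoop-daemon.sh on every host listed in a slaves file
# (one hostname per line), assuming the Hadoop tarball is unpacked at
# the same relative path on each node.
start_on_slaves() {
  slaves_file=$1
  daemon=$2
  runner=${3:-ssh}              # pass "echo" here for a dry run
  while IFS= read -r host; do
    [ -n "$host" ] && "$runner" "$host" "bin/hadoop-daemon.sh start $daemon"
  done < "$slaves_file"
}

# Dry-run example: print what would be executed on each listed host.
# start_on_slaves conf/slaves datanode echo
```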

Hope this helps.

Alex

On Tue, Apr 21, 2009 at 8:56 PM, Yabo-Arber Xu
<arber.research@gmail.com> wrote:

> Hi there,
>
> I set up a small cluster for testing. When I start my cluster on my
> master node, I have to type the password for starting each datanode and
> tasktracker. That's pretty annoying and may be hard to handle when the
> cluster grows. Is there any graceful way to handle this?
>
> Best,
> Arber
>
