hadoop-common-user mailing list archives

From zGreenfelder <zgreenfel...@gmail.com>
Subject Re: Best practices - Large Hadoop Cluster
Date Wed, 11 Aug 2010 02:17:29 GMT
On Tue, Aug 10, 2010 at 4:06 PM, Raj V <rajvish@yahoo.com> wrote:
> Mike
> 512 nodes, even a minute for each node ( ssh-ing to each node, typing a 8

> Thanks to others for useful suggestions. I will examine them and post a summary
> if anyone is interested.
>
> Raj
>

I may well be oversimplifying things, and of course security is always
a concern... but wouldn't it make more sense to generate the ssh key
on one of your central admin machines, then create the authorized_keys
file based on that, and have the network installer (pxe, kickstart,
autoyast or whatever) install that authorized_keys file for a
particular user ID?
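
something along these lines is what I have in mind. a rough sketch
only, assuming a kickstart-style install; the hadoop user, the paths,
and the http://installserver/... URL are all made up for the example:

    # on the central admin machine, once:
    ssh-keygen -t rsa -N "" -f /srv/provision/hadoop_id_rsa
    cp /srv/provision/hadoop_id_rsa.pub /srv/provision/authorized_keys

    # in the kickstart %post section run on every node:
    %post
    mkdir -p /home/hadoop/.ssh
    chmod 700 /home/hadoop/.ssh
    # pull the pre-generated key material from the install server
    wget -q -O /home/hadoop/.ssh/authorized_keys \
        http://installserver/provision/authorized_keys
    chmod 600 /home/hadoop/.ssh/authorized_keys
    chown -R hadoop:hadoop /home/hadoop/.ssh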

so all machines for a given ID would have the exact same
authorized_keys file (and perhaps the same identity/key pair if you
want the hosts to be able to cross-access, e.g. from central admin ->
node.x and from node.x -> node.y). the downside would be that one
lost key would give away the whole kingdom instead of a single
host... but security is always a balance between risk and usability.
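
to make the cross-access concrete: if the nodes also get a copy of
the private key, any box can reach any other box with that one
identity (node047/node048 are hypothetical hostnames):

    # admin -> node, then node -> node, all with the one shared key:
    ssh -i ~/.ssh/hadoop_id_rsa hadoop@node047
    [hadoop@node047]$ ssh -i ~/.ssh/hadoop_id_rsa hadoop@node048

if the all-eggs-in-one-basket part bothers you, a from="..."
restriction on the key line in authorized_keys at least limits which
hosts the key can be used from.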

barring that... maybe configure everything to use kerberos
authentication in sshd and set up/maintain all of that wonderful fun.
although that's a whole can of worms I'd be very reluctant to open,
personally.
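
for what it's worth, the sshd side of kerberos is only a couple of
lines; the can of worms is the KDC, the per-host keytabs, and keeping
clocks in sync. roughly (standard OpenSSH GSSAPI options):

    # /etc/ssh/sshd_config on each node:
    GSSAPIAuthentication yes
    GSSAPICleanupCredentials yes
    # each node also needs its host principal in /etc/krb5.keytab,
    # which is where the real maintenance effort lives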

-- 
Even the Magic 8 ball has an opinion on email clients: Outlook not so good.
