hadoop-mapreduce-user mailing list archives

From Rainer Toebbicke <...@pclella.cern.ch>
Subject Re: adding node(s) to Hadoop cluster
Date Thu, 11 Dec 2014 09:13:30 GMT

On 10 Dec 2014, at 20:08, Vinod Kumar Vavilapalli <vinodkv@hortonworks.com> wrote:

> You don't need patterns for host-names, did you see the support for _HOST in the principal
> names? You can specify the datanode principal to be, say, datanodeUser/_HOST@realm, and Hadoop
> libraries interpret and replace _HOST on each machine with the real host-name.

Thanks, I may be mistaken, but I suspect you missed the point:

For me, auth_to_local's role is to protect the server(s). For example, somebody on an untrusted
"client" can masquerade as hdfs/nodename@REALM and hence take over hdfs through a careless
principal->id translation. A well-configured auth_to_local deflects that rogue "hdfs" to
"nobody" or similar, so a malicious client cannot run, say, "hdfs dfs -chown ...".
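
For illustration, this is the sort of rule set I have in mind in core-site.xml; the node
names and realm are placeholders, of course. Only the two listed datanodes map to "hdfs",
and any other hdfs/... principal falls through to "nobody":

    <property>
      <name>hadoop.security.auth_to_local</name>
      <value>
        RULE:[2:$1/$2@$0](hdfs/(node01|node02)\.example\.com@EXAMPLE\.COM)s/.*/hdfs/
        RULE:[2:$1/$2@$0](hdfs/.*@EXAMPLE\.COM)s/.*/nobody/
        DEFAULT
      </value>
    </property>

Rules are tried in order and the first match wins, so the catch-all hdfs/... rule has to come
after the whitelist.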

The _HOST construct does indeed make it easier to use the same config files throughout the
cluster (see the snippet below), but as far as I can see it mainly applies to the "client".
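
For reference, the kind of setting I mean (principal and realm are placeholders; each
datanode substitutes its own host name for _HOST when it reads the config):

    <property>
      <name>dfs.datanode.kerberos.principal</name>
      <value>dn/_HOST@EXAMPLE.COM</value>
    </property>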

On the server side, I see no way other than auth_to_local with a list/pattern of trusted node
names (on the namenode and on every datanode, in the HDFS case) to prevent the scenario above.
Is there one?
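
(If it helps with testing: as far as I know, one can check what a given principal maps to
under the current rules by running the HadoopKerberosName class, e.g.

    hadoop org.apache.hadoop.security.HadoopKerberosName hdfs/rogue.example.com@EXAMPLE.COM

which prints the short name the auth_to_local rules produce for that principal.)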

Thanks, Rainer