hadoop-hdfs-issues mailing list archives

From "Jakob Homan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-1150) Verify datanodes' identities to clients in secure clusters
Date Sat, 31 Jul 2010 01:22:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-1150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12894236#action_12894236 ]

Jakob Homan commented on HDFS-1150:
-----------------------------------

Sure. -1 on allowing unsecured datanodes to join a secure cluster, and at the moment Hadoop
doesn't have a non-jsvc way of securing/verifying datanodes' ports.

Currently, we secure the datanodes via jsvc, and the reasons for doing so were discussed extensively
on this JIRA.  Were we to allow the requested behavior, a misconfigured cluster could end
up partially unsecured with no warning that it is in such a state, which is not acceptable.
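
To make that concrete: the jsvc approach amounts to having a root-started wrapper bind the
datanode's data-transfer and HTTP listeners to privileged ports (below 1024) before the
process drops to the regular datanode user, so a client connecting to those low ports knows
only a root-blessed process could be listening there.  A rough sketch of that invariant
(illustrative Java only, not the actual HDFS startup code):

import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Rough sketch only -- not the actual HDFS code.  It shows the invariant the
// jsvc-based startup enforces: the datanode's non-RPC listeners must already
// be bound to privileged ports (< 1024) by a root-started wrapper before the
// process drops to the regular datanode user.
public class PrivilegedPortCheck {

  // Ports below 1024 can only be bound by root on a stock Unix system.
  static boolean isPrivileged(int port) {
    return port > 0 && port < 1024;
  }

  // Fail fast if either listener is on an unprivileged port; a secure
  // cluster must never come up partially unsecured with no warning.
  static void verify(ServerSocket dataXceiver, ServerSocket infoServer) {
    if (!isPrivileged(dataXceiver.getLocalPort())
        || !isPrivileged(infoServer.getLocalPort())) {
      throw new RuntimeException("Refusing to start: data transfer port "
          + dataXceiver.getLocalPort() + " and HTTP port "
          + infoServer.getLocalPort() + " must both be privileged");
    }
  }

  public static void main(String[] args) throws Exception {
    // Illustration only: in the real setup jsvc binds these sockets as root
    // and hands them to the datanode, which then runs as a non-root user.
    // 1004/1006 are just the conventional secure-datanode port choices.
    ServerSocket data = new ServerSocket();
    data.bind(new InetSocketAddress(1004));   // cf. dfs.datanode.address
    ServerSocket http = new ServerSocket();
    http.bind(new InetSocketAddress(1006));   // cf. dfs.datanode.http.address
    verify(data, http);
    System.out.println("Listeners are on privileged ports; safe to drop privileges.");
  }
}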

What you're asking for is essentially to make securing the datanodes' non-RPC ports pluggable,
which we fully expect and plan to do.  I'll open a JIRA to make datanode-port security pluggable
once 1150 has been finished off.  jsvc was a reliable solution to a problem discovered very
late in security's development; it has worked very well on our production clusters, but it
certainly still has the odor of a hack about it.  All that's needed is a way of auditing and
verifying that the ports we're running on are secure by Ops' estimation; jsvc, SELinux and
AppArmor would all be reasonable ways of fulfilling such a contract. 
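
Concretely, the pluggable contract could be as small as something like the following (these
names are purely illustrative; no such interface exists in Hadoop today):

// Hypothetical sketch of the pluggable contract described above.  Any
// mechanism Ops trusts (jsvc privileged ports, SELinux, AppArmor, ...)
// would supply an implementation, and the datanode would refuse to start
// unless the check passes.
public interface DataNodePortSecurityVerifier {

  // Return true if the datanode's non-RPC listeners (data transfer and HTTP)
  // are protected in a way this mechanism can vouch for.
  boolean verifySecure(int dataTransferPort, int httpPort);

  // Human-readable name for audit logs, e.g. "jsvc privileged ports".
  String mechanismName();
}

// Example implementation backed by the current jsvc/privileged-port rule;
// SELinux or AppArmor policies would supply their own implementations.
class PrivilegedPortVerifier implements DataNodePortSecurityVerifier {
  @Override
  public boolean verifySecure(int dataTransferPort, int httpPort) {
    return dataTransferPort < 1024 && httpPort < 1024;
  }

  @Override
  public String mechanismName() {
    return "jsvc privileged ports";
  }
}

The datanode would load whichever implementation Ops configures and abort startup if
verifySecure() returns false, which keeps the fail-closed behavior argued for here.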

But until we actually have a plan to implement this in a reliable, verifiable and documented
way, it's best to err on the side of caution and security and provide as much guarantee as
possible that the datanodes are indeed secure in a secure cluster.  Until we support non-jsvc
methods of doing this, a datanode that hasn't been verified via jsvc isn't going to work.

As for the config option mentioned above, it would essentially be my.cluster.is.secure.except.for.this.one.attack.vector,
which is not a good idea for the same reasons as above - it's a huge configuration mistake
waiting to happen - and moreover it will be unnecessary once a fully pluggable system is in place.
 The one place it would be very useful and justifiable would be for developer testing, since
it is a serious pain to start up these secure nodes while doing development now.

> Verify datanodes' identities to clients in secure clusters
> ----------------------------------------------------------
>
>                 Key: HDFS-1150
>                 URL: https://issues.apache.org/jira/browse/HDFS-1150
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>    Affects Versions: 0.22.0
>            Reporter: Jakob Homan
>            Assignee: Jakob Homan
>         Attachments: commons-daemon-1.0.2-src.tar.gz, HDFS-1150-BF-Y20-LOG-DIRS-2.patch, HDFS-1150-BF-Y20-LOG-DIRS.patch, HDFS-1150-BF1-Y20.patch, hdfs-1150-bugfix-1.1.patch, hdfs-1150-bugfix-1.2.patch, hdfs-1150-bugfix-1.patch, HDFS-1150-trunk.patch, HDFS-1150-Y20-BetterJsvcHandling.patch, HDFS-1150-y20.build-script.patch, HDFS-1150-Y20S-ready-5.patch, HDFS-1150-Y20S-ready-6.patch, HDFS-1150-Y20S-ready-7.patch, HDFS-1150-Y20S-ready-8.patch, HDFS-1150-Y20S-Rough-2.patch, HDFS-1150-Y20S-Rough-3.patch, HDFS-1150-Y20S-Rough-4.patch, HDFS-1150-Y20S-Rough.txt
>
>
> Currently we use block access tokens to allow datanodes to verify clients' identities; however, we don't have a way for clients to verify the authenticity of the datanodes themselves.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

