hadoop-common-dev mailing list archives

From "Sanjay Radia (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1701) Provide a security framework design
Date Wed, 28 Nov 2007 22:55:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12546460
] 

Sanjay Radia commented on HADOOP-1701:
--------------------------------------

The concept of an abstract ticket, along with the loginCredentials and Security factory in
the attached patch, could allow us to create a portable layer for plugging in different authentication
technologies.

Java already has such a portable plug-in layer for authentication and authorization: Java's
JAAS and GSS APIs.  If we can leverage these APIs then we will have free access to the various
JAAS and GSS plug-ins for Kerberos, LDAP, Unix, etc.  Furthermore, I believe GSS is cross-language.

The problem is that we don't have the time to figure out exactly how to use the Java
APIs in time for release 0.16, and we want to get permissions into 0.16.

 From what I have seen, I am not sure the patch's ticket, loginCredentials and Security
factory abstractions will help - they do not seem to fit into the Java authentication APIs.
Java provides some basic constructs: Subject, LoginModule and LoginContext.  Subject overlaps
partly with our concept of a ticket, and LoginModule/LoginContext overlap with our Security factory.
If we use the Java APIs then the HDFS code mostly does *not* need to touch the "tickets" (except
when we pass them to the job tracker).
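
For illustration only (this is not from the patch, and the "Hadoop" configuration entry and
class name below are made up), a rough sketch of what a JAAS-based login looks like and how
its pieces line up with ours:

    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    // Rough sketch only; assumes a JAAS config file with an entry named "Hadoop".
    public class JaasLoginSketch {
      public static Subject login() throws LoginException {
        // LoginContext consults the pluggable LoginModules configured for the
        // "Hadoop" entry; this is roughly the role of our Security factory.
        LoginContext lc = new LoginContext("Hadoop");
        lc.login();
        // The authenticated Subject (principals plus credentials) overlaps
        // with our notion of a ticket.
        return lc.getSubject();
      }
    }

The point is that swapping Kerberos for LDAP or Unix authentication would then be a matter of
configuration, not of changing HDFS code.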

Of course we are free to ignore the entire Java framework and build our own, but that is
a big undertaking.
I propose that we do NOT define the Ticket type or the loginCredentials and Security factory
for now.

For now, the class UserGroupInfo (see HADOOP-2229), which implements Writable, is sufficient.

The UserGroupInfo can be passed across at connection establishment.
For example, for socket creation, the socket factory can take a parameter of type Writable:
    getClientSocketFactory(Writable authenticationInfo);

The Writable authenticationInfo can be written into the socket and read at the other end.
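
As a rough sketch (the method names and stream setup here are purely illustrative, not from
the patch), the exchange at connection establishment could look like this:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;
    import org.apache.hadoop.io.Writable;

    public class AuthInfoExchangeSketch {
      // Client side: serialize the authentication info right after connecting.
      public static void send(Socket socket, Writable authenticationInfo) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        authenticationInfo.write(out);
        out.flush();
      }

      // Server side: the matching read.  The server must already know the
      // concrete type (e.g. UserGroupInfo) in order to call readFields().
      public static void receive(Socket socket, Writable authenticationInfo) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        authenticationInfo.readFields(in);
      }
    }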

Currently we simply need to make UserGroupInfo implement Writable.
I don't think the RPC layer needs to do anything besides read and write the authenticationInfo
(which is really the ticket).
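
To sketch what making it Writable amounts to (the fields here are illustrative; the real class
is in HADOOP-2229):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    // Illustrative only: a user name plus group names, serialized with the
    // standard Writable contract so the RPC layer can treat it as opaque data.
    public class UserGroupInfo implements Writable {
      private String userName;
      private String[] groupNames;

      public void write(DataOutput out) throws IOException {
        out.writeUTF(userName);
        out.writeInt(groupNames.length);
        for (String group : groupNames) {
          out.writeUTF(group);
        }
      }

      public void readFields(DataInput in) throws IOException {
        userName = in.readUTF();
        groupNames = new String[in.readInt()];
        for (int i = 0; i < groupNames.length; i++) {
          groupNames[i] = in.readUTF();
        }
      }
    }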

After we get some basic permission checking into HDFS, let's try and see if we can
fit this into the Java security framework.  If we find that it does not fit, then I suggest we define
our own framework along the lines of this patch.


> Provide a security framework design
> -----------------------------------
>
>                 Key: HADOOP-1701
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1701
>             Project: Hadoop
>          Issue Type: New Feature
>    Affects Versions: 0.15.0
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: Tsz Wo (Nicholas), SZE
>             Fix For: 0.16.0
>
>         Attachments: 1701_20071109.patch
>
>
> Only provide a security framework as described below.  A simple implementation will be provided in HADOOP-2229.
> h4._Previous Description_
> In HADOOP-1298, we want to add user information and permissions to the file system.  It requires an authentication service and a user management service.  We should provide a framework and a simple implementation in this issue and extend it later.  As discussed in HADOOP-1298, the framework should be extensible and pluggable.
> - Extensible: possible to extend the framework to the other parts (e.g. map-reduce) of Hadoop.
> - Pluggable: can easily switch security implementations.  Below is a diagram borrowed from Java.
> !http://java.sun.com/javase/6/docs/technotes/guides/security/overview/images/3.jpg!
> - Implement a Hadoop authentication center (HAC).  In the first step, the mechanism of HAC is very simple: it keeps track of a list of usernames in HAC (we only support users; other principals will come later) and verifies the username at user login (yeah, no password).  HAC can run inside the NameNode or run as a stand-alone server.  We will probably use Kerberos to provide a more sophisticated authentication service.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

