hadoop-common-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9533) Hadoop SSO/Token Service
Date Wed, 01 May 2013 20:14:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13646887#comment-13646887 ]

Daryn Sharp commented on HADOOP-9533:
-------------------------------------

Maybe all relevant parties should meet up during the Hadoop Summit, assuming we're all going?

In a nutshell (though maybe not such a small one):
* support multiple SASL mechanisms
* support negotiation of SASL mechanisms
* support multiple protocols per mechanism
* add server id hints for sasl clients
** support kerberos auth to servers with arbitrary service principals
** completely decouple host/ip from tokens
* the above supports servers with multiple NICs
* clients may access a server via any hostname, ip, or even a CNAME

Currently an RPC client decides, based on its own config, the one and only SASL auth mechanism
to use.  If the server doesn't support that mechanism, the only options the server has are to
reject or to tell the client to fall back to simple (insecure) auth.  Pre-connection, the client
guesses whether it can use a token and assumes it can find one based on host or ip.

This does not work in a heterogeneous security environment.  If the client supports A & B
auth and the server does only B: when the client dictates A, the server must reject because
there's no way to negotiate B.  The client is also ill equipped to know whether it has a token
w/o a server hint.

Kerberos authentication does not work across realms w/o cross-realm trust.  The client cannot
connect to NNs in different realms because it assumes all NN service principals
can be divined by subbing _HOST in user/_HOST@REALM.
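The _HOST substitution described above can be sketched as follows (the names here are illustrative, not Hadoop's actual helper; the real resolution also canonicalizes the hostname):

```java
// Sketch of the _HOST substitution: the client derives every NN's service
// principal from one configured pattern, which pins all servers to the single
// REALM baked into that pattern.
public class PrincipalPattern {

  static String resolve(String pattern, String canonicalHost) {
    // e.g. "nn/_HOST@EXAMPLE.COM" + "nn1.example.com"
    //   -> "nn/nn1.example.com@EXAMPLE.COM"
    return pattern.replace("_HOST", canonicalHost.toLowerCase());
  }

  public static void main(String[] args) {
    // Works only while every NN lives in EXAMPLE.COM; a NN in another realm
    // can never be reached this way without cross-realm trust.
    System.out.println(resolve("nn/_HOST@EXAMPLE.COM", "nn1.example.com"));
  }
}
```

Because the realm is fixed in the pattern, a second cluster in OTHER.REALM is simply unreachable from this client config.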

I'm changing the sasl auth sequence so the server advertises mechanisms in preferred order.
 The client instantiates the first mechanism it supports.  Using the javax sasl factory framework,
we can replace hardcoded instantiations of the sasl clients and support multiple,
dynamically loaded auth methods.
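The advertised-list negotiation maps directly onto the stock javax.security.sasl API: createSaslClient takes the whole mechanism list and returns a client for the first mechanism any registered factory supports. A sketch (mechanism names, protocol, server name, and callback values below are illustrative):

```java
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.RealmCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import javax.security.sasl.SaslException;

// Sketch: the server advertises mechanisms in preferred order; the client
// hands the whole list to the javax sasl factory framework, which instantiates
// the first mechanism a registered factory can build -- no hardcoded clients.
public class SaslNegotiationSketch {

  // "FANCY-MECH" stands in for a mechanism this JVM has no factory for;
  // DIGEST-MD5 ships with the JDK's SunSASL provider.
  static final String[] ADVERTISED = { "FANCY-MECH", "DIGEST-MD5" };

  static SaslClient negotiate(String[] advertised) throws SaslException {
    CallbackHandler handler = callbacks -> {
      // Invoked later, during evaluateChallenge; values are placeholders.
      for (Callback cb : callbacks) {
        if (cb instanceof NameCallback) ((NameCallback) cb).setName("hdfs-user");
        else if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword("secret".toCharArray());
        else if (cb instanceof RealmCallback) ((RealmCallback) cb).setText("example.com");
      }
    };
    // Walks the advertised list in order; unknown mechanisms are skipped.
    return Sasl.createSaslClient(advertised, null /* authzid */, "hdfs",
        "nn.example.com", Map.of(), handler);
  }

  public static void main(String[] args) throws Exception {
    SaslClient client = negotiate(ADVERTISED);
    System.out.println("negotiated: " + client.getMechanismName());
  }
}
```

Third-party mechanisms then plug in by registering a SaslClientFactory via a security Provider, with no client-side code changes.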

Auth methods and SASL mechanisms are currently hardcoded, with support for one and only one
auth method per mechanism.  I'm extending the SASL negotiation to use both mechanism &
protocol so we can support protocols over DIGEST-MD5 other than delegation tokens.  This
would allow someone to implement, e.g., ldap.
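One possible shape for the mechanism-plus-protocol dispatch (all names below are hypothetical, not a committed API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: key the server-side dispatch on mechanism *and* protocol, so one
// SASL mechanism (DIGEST-MD5) can carry more than one auth method --
// delegation tokens today, something like ldap tomorrow.
public class MechProtocolRegistry {

  /** Hypothetical per-protocol secret check behind a shared mechanism. */
  interface Authenticator {
    boolean authenticate(String user, char[] secret);
  }

  private final Map<String, Authenticator> handlers = new HashMap<>();

  void register(String mechanism, String protocol, Authenticator a) {
    handlers.put(mechanism + "/" + protocol, a);
  }

  Authenticator lookup(String mechanism, String protocol) {
    // DIGEST-MD5/token and DIGEST-MD5/ldap resolve to different handlers
    // instead of one auth method hardwired to the mechanism.
    return handlers.get(mechanism + "/" + protocol);
  }
}
```

A dynamically loaded ldap auth method would then be one register() call, with no change to the SASL wire mechanics.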

The client no longer assumes it knows the token required by the server, or the service principal
for kerberos.  The server's advertisement of mechanisms will return DIGEST-MD5/token/server-id
if it supports tokens, or GSSAPI/krb5/service-principal.  The client will use the server-id
to find a token, or the correct service principal to get a TGS.  This enables support for
multiple NICs, ips, hostnames, etc.  CNAMEs will also be supported.
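The advertisement and server-id token lookup described above might look like this sketch (the mechanism/protocol/server-id wire format and the class names are assumptions for illustration):

```java
import java.util.Map;
import java.util.Optional;

// Sketch: parse a server's mechanism/protocol/server-id advertisement and use
// the server-id to look up a token, rather than guessing by host or ip.
public class ServerAdvertisement {

  final String mechanism, protocol, serverId;

  ServerAdvertisement(String advertised) {
    // e.g. "DIGEST-MD5/token/nn-cluster1" or "GSSAPI/krb5/nn/_HOST@REALM"
    String[] parts = advertised.split("/", 3);
    mechanism = parts[0];
    protocol = parts[1];
    serverId = parts[2];
  }

  // The client keys its token store by the advertised server-id, so the same
  // token works over any NIC, ip, hostname, or CNAME of the server.
  static Optional<String> selectToken(Map<String, String> tokensByServerId,
                                      ServerAdvertisement ad) {
    if (!"token".equals(ad.protocol)) return Optional.empty();
    return Optional.ofNullable(tokensByServerId.get(ad.serverId));
  }
}
```

For GSSAPI the third field would instead give the client the exact service principal to request a TGS for, removing the _HOST-substitution guess.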

This could eventually lead to the client authenticating on demand instead of assuming it knows
how to login at startup.
                
> Hadoop SSO/Token Service
> ------------------------
>
>                 Key: HADOOP-9533
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9533
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: security
>            Reporter: Larry McCay
>
> This is an umbrella Jira filing to oversee a set of proposals for introducing a new master
service for Hadoop Single Sign On (HSSO).
> There is an increasing need for pluggable authentication providers that authenticate
both users and services as well as validate tokens in order to federate identities authenticated
by trusted IDPs. These IDPs may be deployed within the enterprise or third-party IDPs that
are external to the enterprise.
> These needs speak to a specific pain point: a narrow integration path into the
enterprise identity infrastructure. Kerberos is a fine solution for those that already have
it in place or are willing to adopt it, but there remains a class of user that finds this
unacceptable and needs to integrate with a wider variety of identity management solutions.
> Another specific pain point is that of rolling and distributing keys. A related and integral
part of the HSSO server is a library called the Credential Management Framework (CMF), which
will be a common library for easing the management of secrets, keys and credentials.
> Initially, the existing delegation, block access and job tokens will continue to be utilized.
There may be some changes required to leverage a PKI based signature facility rather than
shared secrets. This is a means to simplify the solution for the pain point of distributing
shared secrets.
> This project will primarily centralize the responsibility of authentication and federation
into a single service that is trusted across the Hadoop cluster and optionally across multiple
clusters. This greatly simplifies a number of things in the Hadoop ecosystem:
> 1.	a single token format that is used across all of Hadoop regardless of authentication
method
> 2.	a single service to have pluggable providers instead of all services
> 3.	a single token authority that would be trusted across the cluster/s and through PKI
encryption be able to easily issue cryptographically verifiable tokens
> 4.	automatic rolling of the token authority’s keys and publishing of the public key
for easy access by those parties that need to verify incoming tokens
> 5.	use of PKI for signatures eliminates the need for securely sharing and distributing
shared secrets
> In addition to serving as the internal Hadoop SSO service, this service will be leveraged
by the Knox Gateway from the cluster perimeter in order to acquire the Hadoop cluster tokens.
The same token mechanism that is used for internal services will be used to represent user
identities. This provides for interesting scenarios such as SSO across Hadoop clusters within
an enterprise and/or into the cloud.
> The HSSO service will be comprised of three major components and capabilities:
> 1.	Federating IDP – authenticates users/services and issues the common Hadoop token
> 2.	Federating SP – validates the token of trusted external IDPs and issues the common
Hadoop token
> 3.	Token Authority – management of the common Hadoop tokens – including: 
>     a.	Issuance 
>     b.	Renewal
>     c.	Revocation
> As this is a meta Jira for tracking this overall effort, the details of the individual
efforts will be submitted along with the child Jira filings.
> Hadoop-Common would seem to be the most appropriate home for such a service and its related
common facilities. We will also leverage and extend existing common mechanisms as appropriate.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
