hadoop-common-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured
Date Tue, 12 Mar 2013 16:53:14 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13600174#comment-13600174 ]

Daryn Sharp commented on HADOOP-9299:

I'll try to find my patch, but it was incomplete and I tried various approaches. First, I tried to avoid using {{KerberosName}} where possible, but that caused problems because Kerberos is always favored regardless of the client config. I actually think that's a good thing; more on that later. I believe the last approach was to blindly strip the realm if the client is insecure.
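
To make that last approach concrete, roughly something like this (a sketch only; the method name and the securityEnabled flag are made up, this is not the actual patch):

{noformat}
// Sketch of "blindly strip the realm if the client is insecure".
// stripRealmIfInsecure() is an illustrative name, not an existing API.
static String stripRealmIfInsecure(String principal, boolean securityEnabled) {
  if (securityEnabled) {
    return principal;                        // secure client: leave it to the name rules
  }
  int at = principal.indexOf('@');
  return at < 0 ? principal : principal.substring(0, at);  // insecure: drop any realm
}
{noformat}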

My problem is running an insecure mini-cluster on my laptop.  If I have no TGT, the unix principal has no realm, so it passes the rules.  If I do have a TGT (bound to the corporate directory), it wants to strip the realm from the Kerberos principal.  If the default realm cannot be determined, or does not match my principal's realm, the rules fail.
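
That failure mode is just the stock DEFAULT rule: as I read {{KerberosName}}, with no explicit auth_to_local rules it only strips a realm that matches the default realm from the krb5 config, and anything else falls through to NoMatchingRule. A simplified model (illustrative, not the real code):

{noformat}
// Simplified model of KerberosName's DEFAULT rule (illustrative only).
static String applyDefaultRule(String principal, String defaultRealm) {
  int at = principal.indexOf('@');
  if (at < 0) {
    return principal;                                  // no realm at all: passes untouched
  }
  String realm = principal.substring(at + 1);
  if (realm.equals(defaultRealm)) {
    return principal.substring(0, at).split("/")[0];   // strip default realm, keep first component
  }
  // any other realm: the real code throws KerberosName$NoMatchingRule here
  throw new IllegalArgumentException("No rules applied to " + principal);
}
{noformat}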

On favoring Kerberos:  If I'm Kerberos authenticated, it stands to reason that's who I am, so the user should be derived from the Kerberos principal regardless of whether security is enabled or disabled on the client.  Similarly, shouldn't an "insecure" client be allowed to communicate with a secure cluster if the user has the necessary Kerberos credentials?  Maybe I'm trying to copy data between a secure and an insecure cluster.

I spoke to Owen a while back about this issue, and we agreed that the client should be able to use Kerberos credentials regardless of the client config.  Where we had mild disagreement is whether the client should be trying to apply the name rules.  I'd make the case that the client should never apply rules that are meant for arbitrarily rewriting the principal.  All we use the rules for on the client is stripping the default realm - if the client changes the username in the principal, I believe the Kerberos auth with the server is going to fail due to a mismatch with the TGT.  Only the server should use the rules to arbitrarily rewrite the principal into a simple username for the namesystem.  The problem is how some of the fields in a token are converted to a simple username on the client, which Owen and I agreed is probably wrong.  We might be able to fix this without causing an incompatibility.
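
In other words, the only transformation I'd let the client do is something along these lines (illustrative only; the full rule machinery, i.e. {{KerberosName.getShortName}} with the auth_to_local rules, stays on the server):

{noformat}
// Illustrative sketch: the client strips its own default realm and nothing else;
// arbitrary rewriting of the principal remains a server-side concern.
static String clientShortName(String principal, String defaultRealm) {
  String suffix = "@" + defaultRealm;
  if (principal.endsWith(suffix)) {
    return principal.substring(0, principal.length() - suffix.length());
  }
  return principal;   // unknown or missing realm: leave the principal alone
}
{noformat}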

On a tangent: these issues illustrate the growing problem of not being able to have (semi-)universal configs that allow communicating with multiple clusters.  The client's security setting shouldn't matter, and the client shouldn't need the server's name rules.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---------------------------------------------------------------------------
>                 Key: HADOOP-9299
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9299
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.0.3-alpha
>            Reporter: Roman Shaposhnik
>            Priority: Blocker
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
>         at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
>         at org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
>         at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
>         at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
>         at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
>         at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
>         ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this is a Hadoop issue rather than an Oozie one is that when I hack /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>    ticket_lifetime = 600
>    default_realm = LOCALHOST
>    default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>    default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>    LOCALHOST = {
>        kdc = localhost:88
>        default_domain = .local
>    }
> [domain_realm]
>    .local = LOCALHOST
> [logging]
>    kdc = FILE:/var/log/krb5kdc.log
>    admin_server = FILE:/var/log/kadmin.log
>    default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it should NOT pay attention to /etc/krb5.conf to begin with.

