hadoop-hdfs-issues mailing list archives

From "Xiao Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-13040) Kerberized inotify client fails despite kinit properly
Date Wed, 21 Feb 2018 06:16:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370999#comment-16370999
] 

Xiao Chen commented on HDFS-13040:
----------------------------------

Thanks for the review, Daryn.

Patch 5 is attached to address the comments, except the 'current user' one. I agree it's the most
correct thing to do, but maybe we can defer it to a future jira.
{quote} floating the doAs login user up to {{getEditsFromTxid}} 
{quote}
Good idea; done this way, leaving the stream class untouched.
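
A minimal sketch of the shape this takes, for posterity (illustrative only: {{readEditsSince}} is a placeholder method, and this is not the actual patch):
{code:java}
// Sketch only -- not the HDFS-13040 patch. The idea: run the edit read as
// the NN's own login user, so that if the TGT has expired the relogin
// happens with the NN principal instead of under the calling client's UGI.
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class LoginUserDoAsSketch {
  // Placeholder for the real edit-reading body.
  static String readEditsSince(long txid) {
    return "edits since " + txid;
  }

  static String getEditsFromTxid(final long txid)
      throws IOException, InterruptedException {
    return UserGroupInformation.getLoginUser().doAs(
        new PrivilegedExceptionAction<String>() {
          @Override
          public String run() throws IOException {
            return readEditsSince(txid);
          }
        });
  }
}
{code}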
{quote}Could the unit test just explicitly set the conf keys
{quote}
Not really: the journal part of the QJM HA cluster needs to be started before the correct journal
URI is known, so we can't set it beforehand. {{initHAConf}} currently sets the shared edits dir
key, presumably for the same reason.
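
(For illustration, roughly the kind of setup this forces on the test; class and method names below are from memory and may not match the patch exactly:)
{code:java}
// Illustration only -- names are from memory and may differ from the test.
// The journal URI exists only after the mini journal cluster has started,
// so the shared-edits key has to be set afterwards rather than up front.
MiniQJMHACluster qjmCluster = new MiniQJMHACluster.Builder(conf).build();
URI journalUri = qjmCluster.getJournalCluster().getQuorumJournalURI("ns1");
conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY, journalUri.toString());
{code}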
{quote}the test
{quote}
Good catch, and helpful explanations. Addressed by using the correct UGIs: hdfs@ is the client,
and hdfs/localhost@ is the NN user. Verified that without the fix I can see the big beautiful
GSSAPI stack trace.
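
(A sketch of the UGI arrangement, for clarity; keytab paths and the action body are placeholders:)
{code:java}
// Sketch of the two UGIs in the test; keytab paths are placeholders.
// The NN runs as its service principal, the client logs in separately and
// issues the inotify calls under its own UGI.
final UserGroupInformation nnUgi = UserGroupInformation
    .loginUserFromKeytabAndReturnUGI("hdfs/localhost@EXAMPLE.COM", "/path/to/nn.keytab");
final UserGroupInformation clientUgi = UserGroupInformation
    .loginUserFromKeytabAndReturnUGI("hdfs@EXAMPLE.COM", "/path/to/client.keytab");

clientUgi.doAs(new PrivilegedExceptionAction<Void>() {
  @Override
  public Void run() throws Exception {
    // Open the DFSInotifyEventInputStream and poll for events here.
    return null;
  }
});
{code}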

One odd thing I found in the test, though, is that I had to configure proxy users for it to work;
otherwise the mkdirs after the relogin would throw
{quote}AuthorizationException): User: hdfs/localhost@EXAMPLE.COM is not allowed to impersonate
hdfs@EXAMPLE.COM
{quote}
at me. Debugging this, it appears to be intentional RPC server auth behavior from this [code|https://github.com/apache/hadoop/blob/121e1e1280c7b019f6d2cc3ba9eae1ead0dd8408/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java#L2260].
Still, my debugging shows the {{protocolUser}} is {{hdfs@ (auth:SIMPLE)}} while the {{realUser}}
is {{hdfs/localhost@ (auth:KERBEROS)}}, which seems weird...
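
(Roughly what setting the proxy users amounts to, in case anyone hits the same error; values here are illustrative rather than copied from the patch:)
{code:java}
// Proxy-user settings so the real user (hdfs/localhost@) is allowed to
// impersonate the client principal (hdfs@). Illustrative values only; see
// the attached patch for the exact configuration.
conf.set("hadoop.proxyuser.hdfs.hosts", "*");
conf.set("hadoop.proxyuser.hdfs.groups", "*");
ProxyUsers.refreshSuperUserGroupsConfiguration(conf);
{code}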

 

> Kerberized inotify client fails despite kinit properly
> ------------------------------------------------------
>
>                 Key: HDFS-13040
>                 URL: https://issues.apache.org/jira/browse/HDFS-13040
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.0
>         Environment: Kerberized, HA cluster, iNotify client, CDH5.10.2
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>         Attachments: HDFS-13040.001.patch, HDFS-13040.02.patch, HDFS-13040.03.patch,
HDFS-13040.04.patch, HDFS-13040.05.patch, HDFS-13040.half.test.patch, TestDFSInotifyEventInputStreamKerberized.java,
TransactionReader.java
>
>
> This issue is similar to HDFS-10799.
> HDFS-10799 turned out to be a client-side issue where the client is responsible for actively renewing its Kerberos ticket.
> However, we found that in a slightly different setup, inotify still fails even if the client has valid Kerberos credentials.
> Suppose the client uses principal hdfs@EXAMPLE.COM,
>  namenode 1 uses server principal hdfs/nn1.example.com@EXAMPLE.COM, and
>  namenode 2 uses server principal hdfs/nn2.example.com@EXAMPLE.COM.
> *After the NameNodes have been up for longer than the Kerberos ticket lifetime*, the client fails with the following error:
> {noformat}
> 18/01/19 11:23:02 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs@GCE.CLOUDERA.COM
(auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): We encountered
an error reading https://nn2.example.com:8481/getJournal?jid=ns1&segmentTxId=8662&storageInfo=-60%3A353531113%3A0%3Acluster3,
https://nn1.example.com:8481/getJournal?jid=ns1&segmentTxId=8662&storageInfo=-60%3A353531113%3A0%3Acluster3.
 During automatic edit log failover, we noticed that all of the remaining edit log streams
are shorter than the current one!  The best remaining edit log ends at transaction 8683, but
we thought we could read up to transaction 8684.  If you continue, metadata will be lost forever!
>         at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:213)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.readOp(NameNodeRpcServer.java:1701)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEditsFromTxid(NameNodeRpcServer.java:1763)
>         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getEditsFromTxid(AuthorizationProviderProxyClientProtocol.java:1011)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEditsFromTxid(ClientNamenodeProtocolServerSideTranslatorPB.java:1490)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> Typically, if the NameNode has an expired Kerberos ticket, the error handling for ordinary edit log tailing lets the NameNode re-login with its own Kerberos principal. However, when inotify uses the same code path to retrieve edits, the current user is the inotify client's principal, so unless the client uses the same principal as the NameNode, the NameNode cannot re-login on the client's behalf.
> Therefore, a more appropriate approach is to use a proxy user so that the NameNode can retrieve edits on behalf of the client.
> I will attach a patch to fix it. This patch has been verified to work on a CDH5.10.2 cluster; however, it seems impossible to craft a unit test for this fix because of the way Hadoop UGI handles Kerberos credentials (I can't have a single process that logs in as two Kerberos principals simultaneously and have them establish a connection).
> A possible workaround is for the inotify client to use the active NameNode's server principal. However, that won't work across a NameNode failover, because the client's principal would then no longer match the active NN's, and authentication would fail.
> Credit: this bug was confirmed and reproduced by [~pifta] and [~r1pp3rj4ck]
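
For reference, a minimal sketch of the proxy-user idea described in the report above, using the standard {{UserGroupInformation}} proxy-user API (illustrative only; not necessarily what the final patch does):
{code:java}
// Sketch of the proxy-user approach from the description: the NN's own
// login user acts as the real user and proxies for the inotify client, so
// any Kerberos relogin during edit retrieval uses the NN principal.
// Illustrative only -- not the committed change.
UserGroupInformation realUser = UserGroupInformation.getLoginUser();
UserGroupInformation proxyUgi =
    UserGroupInformation.createProxyUser("hdfs@EXAMPLE.COM", realUser);
proxyUgi.doAs(new PrivilegedExceptionAction<Void>() {
  @Override
  public Void run() throws Exception {
    // Retrieve edits on behalf of the client here.
    return null;
  }
});
{code}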



