Date: Wed, 21 Feb 2018 06:16:00 +0000 (UTC)
From: "Xiao Chen (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-13040) Kerberized inotify client fails despite kinit properly

     [ https://issues.apache.org/jira/browse/HDFS-13040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HDFS-13040:
-----------------------------
    Attachment: HDFS-13040.05.patch

> Kerberized inotify client fails despite kinit properly
> ------------------------------------------------------
>
>                 Key: HDFS-13040
>                 URL: https://issues.apache.org/jira/browse/HDFS-13040
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.6.0
>         Environment: Kerberized, HA cluster, iNotify client, CDH5.10.2
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>         Attachments: HDFS-13040.001.patch, HDFS-13040.02.patch, HDFS-13040.03.patch, HDFS-13040.04.patch, HDFS-13040.05.patch, HDFS-13040.half.test.patch, TestDFSInotifyEventInputStreamKerberized.java, TransactionReader.java
>
>
> This issue is similar to HDFS-10799.
> HDFS-10799 turned out to be a client-side issue, where the client is responsible for actively renewing its Kerberos ticket.
> However, we found that in a slightly different setup, even if the client has valid Kerberos credentials, inotify still fails.
> Suppose the client uses principal hdfs@EXAMPLE.COM,
> NameNode 1 uses server principal hdfs/nn1.example.com@EXAMPLE.COM, and
> NameNode 2 uses server principal hdfs/nn2.example.com@EXAMPLE.COM.
> *After the NameNodes have been running for longer than the Kerberos ticket lifetime*, the client fails with the following error:
> {noformat}
> 18/01/19 11:23:02 WARN security.UserGroupInformation: PriviledgedActionException as:hdfs@GCE.CLOUDERA.COM (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(java.io.IOException): We encountered an error reading https://nn2.example.com:8481/getJournal?jid=ns1&segmentTxId=8662&storageInfo=-60%3A353531113%3A0%3Acluster3, https://nn1.example.com:8481/getJournal?jid=ns1&segmentTxId=8662&storageInfo=-60%3A353531113%3A0%3Acluster3. During automatic edit log failover, we noticed that all of the remaining edit log streams are shorter than the current one! The best remaining edit log ends at transaction 8683, but we thought we could read up to transaction 8684. If you continue, metadata will be lost forever!
>         at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:213)
>         at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.readOp(NameNodeRpcServer.java:1701)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getEditsFromTxid(NameNodeRpcServer.java:1763)
>         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getEditsFromTxid(AuthorizationProviderProxyClientProtocol.java:1011)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getEditsFromTxid(ClientNamenodeProtocolServerSideTranslatorPB.java:1490)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
> {noformat}
> Typically, if the NameNode has an expired Kerberos ticket, the error handling for regular edit log tailing lets the NameNode re-login with its own Kerberos principal. However, when inotify uses the same code path to retrieve edits, the current user is the inotify client's principal, so unless the client uses the same principal as the NameNode, the NameNode cannot re-login on the client's behalf.
> Therefore, a more appropriate approach is to use a proxy user so that the NameNode can retrieve edits on behalf of the client; see the illustrative sketch below.
> I will attach a patch to fix it.
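> To make the proxy-user idea above concrete, here is a minimal illustrative sketch. It is not the attached patch, and it assumes the server-side edit retrieval can be wrapped in a doAs() against the NameNode's own login UGI, so that any Kerberos re-login happens with the NameNode principal (e.g. hdfs/nn1.example.com@EXAMPLE.COM) rather than with the calling inotify client's principal. The EditFetcher callback and the EventBatch class are hypothetical placeholders standing in for the real edit-log reading code path and org.apache.hadoop.hdfs.inotify.EventBatch.
> {noformat}
> import java.io.IOException;
> import java.security.PrivilegedExceptionAction;
>
> import org.apache.hadoop.security.UserGroupInformation;
>
> public class NameNodeEditFetchSketch {
>
>   /** Hypothetical stand-in for the real edit-log reading code path. */
>   public interface EditFetcher {
>     EventBatch getEditsFromTxid(long txid) throws IOException;
>   }
>
>   /** Hypothetical placeholder for org.apache.hadoop.hdfs.inotify.EventBatch. */
>   public static class EventBatch {
>   }
>
>   /**
>    * Runs the edit fetch as the NameNode's own login user, so that an expired
>    * ticket is re-acquired with the NameNode principal instead of the inotify
>    * client's principal that issued the RPC.
>    */
>   public static EventBatch fetchEditsAsNameNode(final EditFetcher fetcher, final long txid)
>       throws IOException, InterruptedException {
>     final UserGroupInformation nnLoginUser = UserGroupInformation.getLoginUser();
>     return nnLoginUser.doAs(new PrivilegedExceptionAction<EventBatch>() {
>       @Override
>       public EventBatch run() throws IOException {
>         return fetcher.getEditsFromTxid(txid);
>       }
>     });
>   }
> }
> {noformat}
> The only non-placeholder calls above are UserGroupInformation.getLoginUser() and UserGroupInformation#doAs(PrivilegedExceptionAction), both from hadoop-common; where exactly such a wrapper should hook in (for example around getEditsFromTxid) is what the attached patch decides.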
> This patch has been verified to work on a CDH5.10.2 cluster. However, it seems impossible to craft a unit test for this fix because of the way the Hadoop UGI handles Kerberos credentials (I can't have a single process log in as two Kerberos principals simultaneously and let them establish a connection).
> A possible workaround is for the inotify client to use the active NameNode's server principal. However, that does not work across a NameNode failover, because the client's principal would then no longer match the new active NameNode's, and authentication would fail.
> Credit: this bug was confirmed and reproduced by [~pifta] and [~r1pp3rj4ck]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org