phoenix-issues mailing list archives

From "Karan Mehta (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-5198) GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
Date Thu, 25 Apr 2019 05:20:00 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825727#comment-16825727 ]

Karan Mehta commented on PHOENIX-5198:
--------------------------------------

[~zoeminghong] In general, please post questions on the users mailing list.

At first glance, it seems the krb5 config and JAAS files should be configured accordingly and set as system properties, which are missing here ({{java.security.auth.login.config}}).
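
For illustration, a minimal sketch of what that could look like; both paths below are placeholders, not taken from this issue:

{code:java}
// Hypothetical sketch: point the JVM at the Kerberos and JAAS configuration
// before any Hadoop/HBase security code runs. Both paths are placeholders.
System.setProperty("java.security.krb5.conf", "/etc/krb5.conf")
System.setProperty("java.security.auth.login.config", "/path/to/client-jaas.conf")
{code}

On YARN, the same settings would typically be passed to the driver and executors via {{spark.driver.extraJavaOptions}} and {{spark.executor.extraJavaOptions}}.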

Closing this Jira.

> GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> ----------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5198
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5198
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0
>         Environment: HDP 3.0.0
> Phoenix 5.0.0
> HBase 2.0.0
> Spark 2.3.1
> Hadoop 3.0.1
>
>            Reporter: gejx
>            Priority: Blocker
>         Attachments: application_1551919460625_0204.txt
>
>
> I re-ran the program; the code is as follows:
> {code:java}
> @transient val confWrap = new Configuration()
> confWrap.set("hbase.zookeeper.quorum", missionSession.config.zkQuorum)
> confWrap.set("zookeeper.znode.parent", "/hbase-secure")
> confWrap.set("hbase.zookeeper.property.clientPort", "2181")
> confWrap.set("hadoop.security.authentication", "kerberos")
> confWrap.set("hbase.security.authentication", "kerberos")
> confWrap.set("hbase.myclient.keytab", missionSession.config.keytab)
> confWrap.set("hbase.myclient.principal", missionSession.config.principal)
>
> @transient val ugi: UserGroupInformation =
>   UserGroupInformation.loginUserFromKeytabAndReturnUGI(
>     missionSession.config.principal, missionSession.config.keytab)
>
> ugi.doAs(new PrivilegedExceptionAction[Unit] {
>   override def run(): Unit = {
>     val df: DataFrame = sqlContext.phoenixTableAsDataFrame(
>       missionSession.config.tableName, Seq("ID", "NAME"),
>       zkUrl = Some(missionSession.config.zkUrl), conf = confWrap)
>     df.show(2)
>   }
> })
> {code}
> The parameters I submitted are as follows:
> {code:java}
> spark-submit --master yarn --name PHOENIX_SPARK_PLUGIN --deploy-mode cluster \
>   --driver-memory 1024M --executor-memory 1024M --num-executors 2 --executor-cores 1 \
>   --keytab /path/testdmp.keytab --principal dmp@TESTDIP.ORG \
>   --conf spark.yarn.maxAppAttempts=1 \
>   --conf spark.driver.extraJavaOptions="-Dlog4j.configuration=log4j.properties" \
>   --conf spark.executor.extraJavaOptions="-Dlog4j.configuration=log4j.properties" \
>   /opt/workspace/plugin/phoenix-spark-plugin-example-1.11.0-SNAPSHOT-jar-with-dependencies.jar \
>   "DMP_CONF={\"spark\":{\"sparkMaster\":\"yarn\"},\"zkUrl\":\"jdbc:phoenix:test-dmp5.fengdai.org,test-dmp3.fengdai.org,test-dmp4.fengdai.org\",\"tableName\":\"DMP.DMP_TEST\",\"isDS\":true,\"zkQuorum\":\"test-dmp5.fengdai.org,test-dmp3.fengdai.org,test-dmp4.fengdai.org\",\"keytab\":\"/path/testdmp.keytab\",\"principal\":\"dmp@TESTDIP.ORG\"}"{code}
>  
> I tried adding the keytab information to the URL, but that didn't work. Reading the source code, I found that the keytab information is retrieved from the conf when the login is checked, so I configured it accordingly:
> The conf for the sample:
> {code:java}
> confWrap.set("hbase.myclient.keytab", missionSession.config.keytab)
> confWrap.set("hbase.myclient.principal", missionSession.config.principal){code}
> The url for the sample:
> {code:java}
> jdbc:phoenix:test-dmp5.fengdai.org,test-dmp3.fengdai.org,test-dmp4.fengdai.org:dmp@TESTDIP.ORG:/path/testdmp.keytab{code}
> The submission parameters contain the keytab information; the driver can parse SQL, and the executor performed the re-login operation, but it still threw a GSSException. The executor log shows "PrivilegedAction as:dmp". Why does the relogin not change the current UGI?
>  
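> A minimal sketch of the UGI distinction in play here (illustrative only; plain Hadoop UGI semantics, with the principal and keytab values from this report):
> {code:java}
> import java.security.PrivilegedExceptionAction
> import org.apache.hadoop.security.UserGroupInformation
>
> // loginUserFromKeytabAndReturnUGI creates a fresh Kerberos login, but it does not
> // replace the process user: getCurrentUser() only reflects the new UGI inside doAs.
> val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("dmp@TESTDIP.ORG", "/path/testdmp.keytab")
> println(UserGroupInformation.getCurrentUser) // still the YARN container user (auth:SIMPLE)
> ugi.doAs(new PrivilegedExceptionAction[Unit] {
>   override def run(): Unit =
>     println(UserGroupInformation.getCurrentUser) // the Kerberos user (auth:KERBEROS)
> })
> {code}
>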
> driver-log:
> {code:java}
> DEBUG UserGroupInformation: hadoop login
> DEBUG UserGroupInformation: hadoop login commit
> DEBUG UserGroupInformation: using local user:UnixPrincipal: dmp
> DEBUG UserGroupInformation: Using user: "UnixPrincipal: dmp" with name dmp
> DEBUG UserGroupInformation: User entry: "dmp"
> DEBUG UserGroupInformation: Reading credentials from location set in HADOOP_TOKEN_FILE_LOCATION: /hadoop/yarn/local/usercache/dmp/appcache/application_1551919460625_0199/container_e27_1551919460625_0199_01_000001/container_tokens
> DEBUG UserGroupInformation: Loaded 3 tokens
> DEBUG UserGroupInformation: UGI loginUser:dmp (auth:SIMPLE)
> DEBUG UserGroupInformation: hadoop login
> DEBUG UserGroupInformation: hadoop login commit
> DEBUG UserGroupInformation: using kerberos user:dmp@TESTDIP.ORG
> DEBUG UserGroupInformation: Using user: "dmp@TESTDIP.ORG" with name dmp@TESTDIP.ORG
> DEBUG UserGroupInformation: User entry: "dmp@TESTDIP.ORG"
> INFO UserGroupInformation: Login successful for user dmp@TESTDIP.ORG using keytab file testdmp.keytab-fb56007a-7d7d-4639-bf9e-5726b91901fd
> DEBUG UserGroupInformation: PrivilegedAction as:dmp@TESTDIP.ORG (auth:KERBEROS) from:org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:814)
> DEBUG UserGroupInformation: PrivilegedAction as:dmp@TESTDIP.ORG (auth:KERBEROS) from:org.apache.spark.deploy.yarn.ApplicationMaster.doAsUser(ApplicationMaster.scala:814)
> {code}
> executor-log:
> {code:java}
> 19/03/14 22:10:08 DEBUG SparkHadoopUtil: creating UGI for user: dmp
> 19/03/14 22:10:08 DEBUG UserGroupInformation: hadoop login
> 19/03/14 22:10:08 DEBUG UserGroupInformation: hadoop login commit
> 19/03/14 22:10:08 DEBUG UserGroupInformation: using local user:UnixPrincipal: dmp
> 19/03/14 22:10:08 DEBUG UserGroupInformation: Using user: "UnixPrincipal: dmp" with name dmp
> 19/03/14 22:10:08 DEBUG UserGroupInformation: User entry: "dmp"
> 19/03/14 22:10:08 DEBUG UserGroupInformation: Reading credentials from location set in HADOOP_TOKEN_FILE_LOCATION: /hadoop/yarn/local/usercache/dmp/appcache/application_1551919460625_0204/container_e27_1551919460625_0204_01_000002/container_tokens
> 19/03/14 22:10:08 DEBUG UserGroupInformation: Loaded 3 tokens
> 19/03/14 22:10:08 DEBUG UserGroupInformation: UGI loginUser:dmp (auth:SIMPLE)
> 19/03/14 22:10:08 DEBUG UserGroupInformation: PrivilegedAction as:dmp (auth:SIMPLE) from:org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:64)
> -----------------------------------------------------------------------------------------------------------------------------------------
> 19/03/14 22:10:50 DEBUG UserGroupInformation: hadoop login
> 19/03/14 22:10:50 DEBUG UserGroupInformation: hadoop login commit
> 19/03/14 22:10:50 DEBUG UserGroupInformation: using kerberos user:dmp@TESTDIP.ORG
> 19/03/14 22:10:50 DEBUG UserGroupInformation: Using user: "dmp@TESTDIP.ORG" with name dmp@TESTDIP.ORG
> 19/03/14 22:10:50 DEBUG UserGroupInformation: User entry: "dmp@TESTDIP.ORG"
> 19/03/14 22:10:50 INFO UserGroupInformation: Login successful for user dmp@TESTDIP.ORG using keytab file /tesdmp/keytabs/nnjKorRc37PPPjLf/dmp/testdmp.keytab
> ------------------------------------------------------------------------------------------------------------------------------------------
> 19/03/14 22:11:02 DEBUG AbstractHBaseSaslRpcClient: Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/test-dmp4.fengdai.org@TESTDIP.ORG
> 19/03/14 22:11:03 DEBUG UserGroupInformation: PrivilegedAction as:dmp (auth:SIMPLE) from:org.apache.hadoop.hbase.security.NettyHBaseSaslRpcClientHandler.handlerAdded(NettyHBaseSaslRpcClientHandler.java:106)
> 19/03/14 22:11:03 DEBUG UserGroupInformation: PrivilegedActionException as:dmp (auth:SIMPLE) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> {code}
> In this method, re-logging does not change the current User; ConnectionInfo is cached based on the current User, and the connection is not available at this point.
> Related issue: https://issues.apache.org/jira/browse/PHOENIX-5145?jql=project%20%3D%20PHOENIX%20AND%20issuetype%20%3D%20Bug
>  
> log:
> {code:java}
> 19/03/14 22:10:51 DEBUG PhoenixDriver: tmp==my current user is dmp (auth:SIMPLE)
> 19/03/14 22:10:51 DEBUG PhoenixDriver: tmp==my login user is dmp@TESTDIP.ORG (auth:KERBEROS){code}
> method:
> {code:java}
> public ConnectionInfo normalize(ReadOnlyProps props, Properties info) throws SQLException {
>     String zookeeperQuorum = this.getZookeeperQuorum();
>     Integer port = this.getPort();
>     String rootNode = this.getRootNode();
>     String keytab = this.getKeytab();
>     String principal = this.getPrincipal();
>     // Normalize connInfo so that a url explicitly specifying versus implicitly inheriting
>     // the default values will both share the same ConnectionQueryServices.
>     if (zookeeperQuorum == null) {
>         zookeeperQuorum = props.get(QueryServices.ZOOKEEPER_QUORUM_ATTRIB);
>         if (zookeeperQuorum == null) {
>             throw new SQLExceptionInfo.Builder(SQLExceptionCode.MALFORMED_CONNECTION_URL)
>                     .setMessage(this.toString()).build().buildException();
>         }
>     }
>     if (port == null) {
>         if (!isConnectionless) {
>             String portStr = props.get(QueryServices.ZOOKEEPER_PORT_ATTRIB);
>             if (portStr != null) {
>                 try {
>                     port = Integer.parseInt(portStr);
>                 } catch (NumberFormatException e) {
>                     throw new SQLExceptionInfo.Builder(SQLExceptionCode.MALFORMED_CONNECTION_URL)
>                             .setMessage(this.toString()).build().buildException();
>                 }
>             }
>         }
>     } else if (isConnectionless) {
>         throw new SQLExceptionInfo.Builder(SQLExceptionCode.MALFORMED_CONNECTION_URL)
>                 .setMessage("Port may not be specified when using the connectionless url \"" + this.toString() + "\"")
>                 .build().buildException();
>     }
>     if (rootNode == null) {
>         if (!isConnectionless) {
>             rootNode = props.get(QueryServices.ZOOKEEPER_ROOT_NODE_ATTRIB);
>         }
>     } else if (isConnectionless) {
>         throw new SQLExceptionInfo.Builder(SQLExceptionCode.MALFORMED_CONNECTION_URL)
>                 .setMessage("Root node may not be specified when using the connectionless url \"" + this.toString() + "\"")
>                 .build().buildException();
>     }
>     if (principal == null) {
>         if (!isConnectionless) {
>             principal = props.get(QueryServices.HBASE_CLIENT_PRINCIPAL);
>         }
>     }
>     if (keytab == null) {
>         if (!isConnectionless) {
>             keytab = props.get(QueryServices.HBASE_CLIENT_KEYTAB);
>         }
>     }
>     if (!isConnectionless()) {
>         boolean credsProvidedInUrl = null != principal && null != keytab;
>         boolean credsProvidedInProps = info.containsKey(QueryServices.HBASE_CLIENT_PRINCIPAL)
>                 && info.containsKey(QueryServices.HBASE_CLIENT_KEYTAB);
>         if (credsProvidedInUrl || credsProvidedInProps) {
>             // PHOENIX-3189 Because ConnectionInfo is immutable, we must make sure all parts of it are correct before
>             // construction; this also requires the Kerberos user credentials object (since they are compared by reference
>             // and not by value. If the user provided a principal and keytab via the JDBC url, we must make sure that the
>             // Kerberos login happens *before* we construct the ConnectionInfo object. Otherwise, the use of ConnectionInfo
>             // to determine when ConnectionQueryServices impl's should be reused will be broken.
>             try {
>                 // Check if we need to authenticate with kerberos so that we cache the correct ConnectionInfo
>                 UserGroupInformation currentUser = UserGroupInformation.getCurrentUser();
>                 if (!currentUser.hasKerberosCredentials() || !isSameName(currentUser.getUserName(), principal)) {
>                     synchronized (KERBEROS_LOGIN_LOCK) {
>                         // Double check the current user, might have changed since we checked last. Don't want
>                         // to re-login if it's the same user.
>                         currentUser = UserGroupInformation.getCurrentUser();
>                         if (!currentUser.hasKerberosCredentials() || !isSameName(currentUser.getUserName(), principal)) {
>                             final Configuration config = getConfiguration(props, info, principal, keytab);
>                             logger.info("Trying to connect to a secure cluster as {} with keytab {}",
>                                     config.get(QueryServices.HBASE_CLIENT_PRINCIPAL),
>                                     config.get(QueryServices.HBASE_CLIENT_KEYTAB));
>                             UserGroupInformation.setConfiguration(config);
>                             User.login(config, QueryServices.HBASE_CLIENT_KEYTAB, QueryServices.HBASE_CLIENT_PRINCIPAL, null);
>                             logger.info("tmp==ugi user is{},auth is{}", UserGroupInformation.getCurrentUser().getUserName(),
>                                     UserGroupInformation.getCurrentUser().getAuthenticationMethod());
>                             logger.info("tmp==ugi login user is{},auth is{}", UserGroupInformation.getLoginUser().getUserName(),
>                                     UserGroupInformation.getLoginUser().getAuthenticationMethod());
>                             logger.info("Successful login to secure cluster");
>                         }
>                     }
>                 } else {
>                     // The user already has Kerberos creds, so there isn't anything to change in the ConnectionInfo.
>                     logger.debug("Already logged in as {}", currentUser);
>                 }
>             } catch (IOException e) {
>                 throw new SQLExceptionInfo.Builder(SQLExceptionCode.CANNOT_ESTABLISH_CONNECTION)
>                         .setRootCause(e).build().buildException();
>             }
>         } else {
>             logger.debug("Principal and keytab not provided, not attempting Kerberos login");
>         }
>     } // else, no connection, no need to login
>     // Will use the current User from UGI
>     return new ConnectionInfo(zookeeperQuorum, port, rootNode, principal, keytab);
> }
> {code}
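> One reading of the method above (not verified here): {{User.login}} swaps the static login user, but {{UserGroupInformation.getCurrentUser()}} on an executor still returns the container's SIMPLE user, and that is the identity both the cached ConnectionInfo and the HBase SASL client end up using. A sketch of a possible workaround under that assumption (the JDBC URL below is a placeholder):
> {code:java}
> import java.security.PrivilegedExceptionAction
> import java.sql.{Connection, DriverManager}
> import org.apache.hadoop.security.UserGroupInformation
>
> // Open the Phoenix connection inside doAs on the Kerberos UGI, so that
> // getCurrentUser() - and with it the ConnectionInfo cache key and the
> // SASL handshake - sees the KERBEROS user instead of the SIMPLE one.
> val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("dmp@TESTDIP.ORG", "/path/testdmp.keytab")
> val conn: Connection = ugi.doAs(new PrivilegedExceptionAction[Connection] {
>   override def run(): Connection =
>     DriverManager.getConnection("jdbc:phoenix:zk-host:2181:/hbase-secure")
> })
> {code}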
> So I always get the following exceptions:
> {code:java}
> 19/03/14 22:11:11 DEBUG ClientCnxn: Reading reply sessionid:0x26975f6aaa9056d, packet:: clientPath:/hbase-secure/meta-region-server serverPath:/hbase-secure/meta-region-server finished:false header:: 9,4 replyHeader:: 9,34359750201,0 request:: '/hbase-secure/meta-region-server,F response:: #ffffffff000146d61737465723a3136303030ffffffa1dffffffbafffffff043fffffff53d7c50425546a21a15746573742d646d70342e66656e676461692e6f726710ffffff947d18ffffffe5fffffff6ffffffc1ffffffb0ffffff972d100183,s{8589935420,34359739604,1543999435222,1552464056849,257,0,0,0,68,0,8589935420}
> 19/03/14 22:11:11 DEBUG AbstractHBaseSaslRpcClient: Creating SASL GSSAPI client. Server's Kerberos principal name is hbase/test-dmp4.fengdai.org@TESTDIP.ORG
> 19/03/14 22:11:11 DEBUG UserGroupInformation: PrivilegedAction as:dmp (auth:SIMPLE) from:org.apache.hadoop.hbase.security.NettyHBaseSaslRpcClientHandler.handlerAdded(NettyHBaseSaslRpcClientHandler.java:106)
> 19/03/14 22:11:11 DEBUG UserGroupInformation: PrivilegedActionException as:dmp (auth:SIMPLE) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> 19/03/14 22:11:11 DEBUG RpcRetryingCallerImpl: Call exception, tries=7, retries=7, started=11862 ms ago, cancelled=false, msg=Call to test-dmp4.fengdai.org/10.200.162.25:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)], details=row 'SYSTEM:CATALOG' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=test-dmp4.fengdai.org,16020,1552463985509, seqNum=-1, exception=java.io.IOException: Call to test-dmp4.fengdai.org/10.200.162.25:16020 failed on local exception: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
> at org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:92)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
> at org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.userEventTriggered(ChannelInboundHandlerAdapter.java:108)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
> at org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.userEventTriggered(ChannelInboundHandlerAdapter.java:108)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.userEventTriggered(ByteToMessageDecoder.java:353)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
> at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1377)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
> at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:929)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:179)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$500(NettyRpcConnection.java:71)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection$2.operationComplete(NettyRpcConnection.java:247)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:163)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection.saslNegotiate(NettyRpcConnection.java:201)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$800(NettyRpcConnection.java:71)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:273)
> at org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:261)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
> at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
> at org.apache.hbase.thir
> {code}
> I have uploaded a full debug log. Can anyone offer a suggestion?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
