Date: Tue, 28 Jan 2014 23:19:09 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos

    [ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13884892#comment-13884892 ]

Hadoop QA commented on HDFS-5804:
---------------------------------

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12625685/HDFS-5804.patch
  against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.

    {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs-nfs.

    {color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/5966//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5966//console

This message is automatically generated.


> HDFS NFS Gateway fails to mount and proxy when using Kerberos
> -------------------------------------------------------------
>
>                 Key: HDFS-5804
>                 URL: https://issues.apache.org/jira/browse/HDFS-5804
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: nfs
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Abin Shahab
>         Attachments: HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, exception-as-root.log, javadoc-after-patch.log, javadoc-before-patch.log
>
>
> When using the HDFS NFS gateway with secure Hadoop (hadoop.security.authentication: kerberos), mounting HDFS fails.
> Additionally, there is no mechanism to support proxy users (the NFS gateway needs to proxy as the user invoking commands on the HDFS mount).
> Steps to reproduce:
> 1) Start a Hadoop cluster with Kerberos enabled.
> 2) sudo su -l nfsserver and start an NFS server. The 'nfsserver' account has an account in Kerberos.
> 3) Get the keytab for nfsserver, and issue the following mount command: mount -t nfs -o vers=3,proto=tcp,nolock $server:/ $mount_point
> 4) You'll see in the nfsserver logs that Kerberos is complaining about not having a TGT for root; a sketch of the missing keytab login follows below.
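The "no TGT for root" symptom in step 4 indicates the gateway process contacts the NameNode without ever acquiring Kerberos credentials of its own. A minimal sketch of the keytab login that Hadoop's UserGroupInformation API provides is below; the principal name and keytab path are illustrative placeholders rather than values taken from this issue, and the sketch is not the HDFS-5804 patch itself.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class GatewayLoginSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Secure mode must be enabled for the login below to use Kerberos.
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Hypothetical principal and keytab path; a real gateway would read
    // these from its configuration instead of hard-coding them.
    UserGroupInformation.loginUserFromKeytab(
        "nfsserver/my-nfs-server-host.com@EXAMPLE.COM",
        "/etc/security/keytabs/nfsserver.keytab");

    System.out.println("Logged in as " + UserGroupInformation.getLoginUser());
  }
}
{code}

With such a login in place, subsequent RPC connections authenticate as the nfsserver principal instead of failing for lack of a TGT for root.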
> This is the stacktrace:
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "my-nfs-server-host.com/10.252.4.197"; destination host is: "my-namenode-host.com":8020;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileLinkInfo(ClientNamenodeProtocolTranslatorPB.java:664)
> at org.apache.hadoop.hdfs.DFSClient.getFileLinkInfo(DFSClient.java:1713)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileStatus(Nfs3Utils.java:58)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileAttr(Nfs3Utils.java:79)
> at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.fsinfo(RpcProgramNfs3.java:1643)
> at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:1891)
> at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:143)
> at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:281)
> at org.apache.hadoop.oncrpc.RpcUtil$RpcMessageParserStage.messageReceived(RpcUtil.java:132)
> at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
> at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
> at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
> at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
> at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
> at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
> at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:620)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:583)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:667)
> at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
> at org.apache.hadoop.ipc.Client.call(Client.java:1318)
> ... 43 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:387)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:494)
> at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:314)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:659)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:655)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:654)
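The root cause at the bottom of the trace, "Client cannot authenticate via:[TOKEN, KERBEROS]", means the RPC client found neither a delegation token nor a Kerberos login for the calling UGI. The second half of the report, proxying as the user invoking commands on the mount, is conventionally handled with Hadoop's proxy-user API. A minimal sketch, assuming the gateway has already logged in from its keytab (as in the earlier sketch) and using "alice" as a stand-in for the remote user:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserSketch {
  public static void main(String[] args) throws Exception {
    // Impersonate "alice" on top of the gateway's real (keytab) login.
    UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
        "alice", UserGroupInformation.getLoginUser());

    // Everything inside doAs() reaches the NameNode as alice.
    FileStatus status = proxyUgi.doAs(new PrivilegedExceptionAction<FileStatus>() {
      @Override
      public FileStatus run() throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        return fs.getFileStatus(new Path("/"));
      }
    });
    System.out.println(status);
  }
}
{code}

For the impersonation to be accepted, the NameNode's core-site.xml must also authorize the gateway user via the standard hadoop.proxyuser.nfsserver.hosts and hadoop.proxyuser.nfsserver.groups properties (key names assume the gateway runs as "nfsserver").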
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)