hadoop-hdfs-issues mailing list archives

From "Aaron T. Myers (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5804) HDFS NFS Gateway fails to mount and proxy when using Kerberos
Date Thu, 06 Feb 2014 18:40:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13893637#comment-13893637 ]

Aaron T. Myers commented on HDFS-5804:
--------------------------------------

bq. On #1, the NFS gateway logs in as a manual HDFS client. By manual, I mean it acts right now as a human user would: the human user has to first get the TGT for the appropriate account, and then issue the HDFS commands. The current NFS gateway does the same.
bq. If I understand you correctly, the NFS gateway should be able to get its own TGTs and renew them (just like the NameNode and other Hadoop daemons can). We plan to add that functionality soon.

Yes, you understand my point correctly. Without this functionality the patch is not very
robust. In a production environment the NFS gateway will typically be started at boot by init
scripts, so there is no opportunity to run `kinit` beforehand. Also, if using a login based on
a local FS ticket cache, the ticket will need to be renewed every few hours, so the user would
have to write a script to run `kinit` periodically. That approach is also problematic because
ticket renewal via a local FS ticket cache is not atomic, so a busy NFS gateway will have
problems during renewal.
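For reference, here's roughly what a daemon-style keytab login looks like. This is a minimal sketch, not the patch under review: the class name, method names, and parameters are illustrative, but UserGroupInformation.loginUserFromKeytab() and checkTGTAndReloginFromKeytab() are the standard Hadoop entry points the NN/DN use for exactly this.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

// Illustrative sketch only -- not the HDFS-5804 patch.
public class NfsGatewayLoginSketch {

  // Log in once at startup from a keytab: no interactive kinit needed,
  // so starting the gateway from an init script works.
  public static void loginAsGateway(Configuration conf, String principal,
      String keytabPath) throws IOException {
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
  }

  // Call periodically (or before issuing RPCs): re-logs-in from the
  // keytab in-process when the TGT is close to expiring, avoiding the
  // non-atomic local-FS ticket cache renewal described above.
  public static void renewIfNeeded() throws IOException {
    UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
  }
}
{code}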

bq. On #2, I completely agree. We should update the HdfsNfsGateway.apt.vm. I will post a patch soon.

Thanks. I also strongly suspect that in most deployments the NFS gateway will run as the same
user as the NN, which makes it the HDFS superuser. I think we should also seriously consider
making the HDFS superuser capable of proxying all users by default, which would mean that most
deployments would not need to manually configure the NFS gateway user as a proxy user; see the
config sketch below.
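For illustration, the proxyuser settings that deployments currently have to add by hand use the standard hadoop.proxyuser.* keys in core-site.xml. The 'nfsserver' login and the host value here are just examples taken from the repro steps below, not prescribed values.

{code:xml}
<!-- core-site.xml: let the gateway's login user impersonate other users.
     'nfsserver' and the host value are illustrative. -->
<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>nfs-gateway-host.example.com</value>
</property>
{code}

Making the superuser a proxy for all users by default would let most deployments skip this step entirely.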

I recommend we file a new JIRA to address both of the above issues ASAP. I'd be happy to review
it.

> HDFS NFS Gateway fails to mount and proxy when using Kerberos
> -------------------------------------------------------------
>
>                 Key: HDFS-5804
>                 URL: https://issues.apache.org/jira/browse/HDFS-5804
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: nfs
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Abin Shahab
>            Assignee: Abin Shahab
>             Fix For: 3.0.0, 2.4.0
>
>         Attachments: HDFS-5804-documentation.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, HDFS-5804.patch, exception-as-root.log, javadoc-after-patch.log, javadoc-before-patch.log
>
>
> When using the HDFS NFS gateway with secure Hadoop (hadoop.security.authentication: kerberos), mounting HDFS fails.
> Additionally, there is no mechanism to support proxy users (NFS needs to proxy as the user invoking commands on the HDFS mount).
> Steps to reproduce:
> 1) Start a Hadoop cluster with Kerberos enabled.
> 2) sudo su -l nfsserver and start an NFS server. This 'nfsserver' account has an account in Kerberos.
> 3) Get the keytab for nfsserver, and issue the following mount command: mount -t nfs -o vers=3,proto=tcp,nolock $server:/ $mount_point
> 4) You'll see in the nfsserver logs that Kerberos is complaining about not having a TGT for root.
> This is the stacktrace: 
> java.io.IOException: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "my-nfs-server-host.com/10.252.4.197"; destination host is: "my-namenode-host.com":8020;
> 	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1351)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> 	at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
> 	at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> 	at com.sun.proxy.$Proxy9.getFileLinkInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileLinkInfo(ClientNamenodeProtocolTranslatorPB.java:664)
> 	at org.apache.hadoop.hdfs.DFSClient.getFileLinkInfo(DFSClient.java:1713)
> 	at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileStatus(Nfs3Utils.java:58)
> 	at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileAttr(Nfs3Utils.java:79)
> 	at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.fsinfo(RpcProgramNfs3.java:1643)
> 	at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:1891)
> 	at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:143)
> 	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> 	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
> 	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:281)
> 	at org.apache.hadoop.oncrpc.RpcUtil$RpcMessageParserStage.messageReceived(RpcUtil.java:132)
> 	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> 	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
> 	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> 	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
> 	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
> 	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
> 	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> 	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
> 	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
> 	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> 	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> 	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> 	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
> 	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
> 	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
> 	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> 	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> 	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> 	at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:620)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> 	at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:583)
> 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:667)
> 	at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
> 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1318)
> 	... 43 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
> 	at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:170)
> 	at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:387)
> 	at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:494)
> 	at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:314)
> 	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:659)
> 	at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:655)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:654)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
