hadoop-common-issues mailing list archives

From "Zhe Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-13206) Delegation token cannot be fetched and used by different versions of client
Date Tue, 19 Jul 2016 23:17:20 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385049#comment-15385049 ]

Zhe Zhang commented on HADOOP-13206:
------------------------------------

I did more debugging and found the reason why different versions of the client return different
formats of {{service}}.

In *trunk*, {{WebHdfsFileSystem#getDelegationToken}} sets {{service}} as:
{code}
    if (token != null) {
      token.setService(tokenServiceName);
{code}

{{tokenServiceName}} is set as follows:
{code}
    this.tokenServiceName = isLogicalUri ?
        HAUtilClient.buildTokenServiceForLogicalUri(uri, getScheme())
        : SecurityUtil.buildTokenService(getCanonicalUri());
{code}

This essentially creates a logical URI like {{webhdfs://myhost}}.
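
As an illustration, here is a minimal stand-alone sketch of the service format this path produces; the host name is made up and the snippet only mirrors the string shape described above, not the actual {{HAUtilClient}} code:
{code}
import java.net.URI;

public class TrunkTokenServiceSketch {
  public static void main(String[] args) {
    // Hypothetical logical (HA) WebHDFS URI; "myhost" is a logical name, not a real host.
    URI uri = URI.create("webhdfs://myhost");

    // Trunk keeps the scheme plus the logical authority as the token service,
    // so the service string never contains a numeric IP.
    String service = uri.getScheme() + "://" + uri.getHost();
    System.out.println(service);   // prints: webhdfs://myhost
  }
}
{code}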

In *branch-2.3*, the logic is as below, which results in numeric IP addresses (a small sketch of the resulting format follows the code block).
{code}
SecurityUtil.setTokenService(token, getCurrentNNAddr());
...
this.nnAddrs = DFSUtil.resolveWebHdfsUri(this.uri, conf);
...
  /**
   * Resolve an HDFS URL into a real InetSocketAddress. It works like a DNS resolver
   * when the URL points to a non-HA cluster. When the URL points to an HA
   * cluster, the resolver further resolves the logical name (i.e., the authority
   * in the URL) into real namenode addresses.
   */
  public static InetSocketAddress[] resolveWebHdfsUri(URI uri, Configuration conf)
      throws IOException {
    int defaultPort;
    String scheme = uri.getScheme();
    if (WebHdfsFileSystem.SCHEME.equals(scheme)) {
      defaultPort = DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_DEFAULT;
    } else if (SWebHdfsFileSystem.SCHEME.equals(scheme)) {
      defaultPort = DFSConfigKeys.DFS_NAMENODE_HTTPS_PORT_DEFAULT;
    } else {
      throw new IllegalArgumentException("Unsupported scheme: " + scheme);
    }

    ArrayList<InetSocketAddress> ret = new ArrayList<InetSocketAddress>();

    if (!HAUtil.isLogicalUri(conf, uri)) {
      InetSocketAddress addr = NetUtils.createSocketAddr(uri.getAuthority(),
          defaultPort);
      ret.add(addr);

    } else {
      Map<String, Map<String, InetSocketAddress>> addresses = DFSUtil
          .getHaNnWebHdfsAddresses(conf, scheme);

      for (Map<String, InetSocketAddress> addrs : addresses.values()) {
        for (InetSocketAddress addr : addrs.values()) {
          ret.add(addr);
        }
      }
    }

    InetSocketAddress[] r = new InetSocketAddress[ret.size()];
    return ret.toArray(r);
  }
{code}
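
For comparison, a stand-alone sketch (with a made-up host) of roughly what the 2.3 path ends up with for a non-HA URI; the real code goes through {{NetUtils}}/{{SecurityUtil}}, and this only mimics the resulting "IP:port" format (the default behavior when {{hadoop.security.token.service.use_ip}} is left at true):
{code}
import java.net.InetSocketAddress;
import java.net.URI;

public class Branch23TokenServiceSketch {
  public static void main(String[] args) {
    // Hypothetical non-HA WebHDFS URI pointing at a NameNode HTTP address.
    URI uri = URI.create("webhdfs://nn1.example.com:50070");

    // branch-2.3 resolves the authority to a socket address and then stores the
    // token service in numeric "IP:port" form.
    InetSocketAddress addr = new InetSocketAddress(uri.getHost(), uri.getPort());
    String service = addr.isUnresolved()
        ? addr.getHostString() + ":" + addr.getPort()                 // host did not resolve
        : addr.getAddress().getHostAddress() + ":" + addr.getPort();  // e.g. 10.20.30.40:50070
    System.out.println(service);
  }
}
{code}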

It's hard to add a unit test because we can't emulate a version 2.3 client in trunk code,
but I hope the above explanation is clear enough.
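
To make the failure mode concrete, here is a tiny stand-alone illustration (a plain Java map standing in for the real credentials lookup, with hypothetical values) of why the two formats break token selection: tokens are effectively looked up by their service string, so a token stored under one format is never found under the other:
{code}
import java.util.HashMap;
import java.util.Map;

public class ServiceMismatchSketch {
  public static void main(String[] args) {
    // Tokens are keyed by their service string; this map stands in for the
    // client's credential store.
    Map<String, String> tokensByService = new HashMap<>();

    // A 2.3 client stored the delegation token under the numeric form.
    tokensByService.put("10.20.30.40:50070", "delegation-token-bytes");

    // A trunk client builds the hostname/logical form for the same filesystem,
    // so the lookup misses and the fetched token cannot be used.
    System.out.println(tokensByService.get("webhdfs://myhost"));   // prints: null
  }
}
{code}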

> Delegation token cannot be fetched and used by different versions of client
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-13206
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13206
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.3.0, 2.6.1
>            Reporter: Zhe Zhang
>            Assignee: Zhe Zhang
>         Attachments: HADOOP-13206.00.patch, HADOOP-13206.01.patch, HADOOP-13206.02.patch
>
>
> We have observed that an HDFS delegation token fetched by a 2.3.0 client cannot be used
> by a 2.6.1 client, and vice versa. Through some debugging I found that it's a mismatch between
> the token's {{service}} and the {{service}} of the filesystem (e.g. {{webhdfs://host.something.com:50070/}}).
> One would be in numerical IP-address format and the other in hostname format.


