hadoop-hdfs-issues mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration
Date Sat, 10 Nov 2018 02:10:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16682145#comment-16682145
] 

Konstantin Shvachko commented on HDFS-14017:
--------------------------------------------

Had an offline discussion with Erik. The gist of the problem is that the virtual address of
the NameNode in the current IPFailoverProxyProvider comes from the namespaceID, and it doesn't
look further into the physical addresses of the NameNodes. So if we want to keep this behavior,
we should do the same for ObserverReadProxyProviderWithIPFailover.
I suggest we double-check that this is the case (Erik says it is). If so, let's go this route
even though it seems hacky, and let's finally document this behavior properly.
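To make the intent concrete, here is a minimal sketch (a hypothetical helper, not the actual
IPFailoverProxyProvider code) of deriving the single virtual NameNode address from the nameservice
URI alone, without looking up the physical hosts:
{code:java}
import java.net.InetSocketAddress;
import java.net.URI;

// Illustrative only: an IP-failover style provider can derive its one (virtual)
// NameNode address straight from the URI, never consulting the per-NameNode
// dfs.namenode.rpc-address.* entries.
public class VirtualAddressSketch {
  static InetSocketAddress virtualAddressOf(URI uri) {
    // e.g. hdfs://nn.xyz.com:8020 -- the virtual host that fails over at the IP layer
    int port = uri.getPort() == -1 ? 8020 : uri.getPort(); // assume the default RPC port
    return new InetSocketAddress(uri.getHost(), port);     // no lookup of nn1/nn2
  }
}
{code}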

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -------------------------------------------------------------------------
>
>                 Key: HDFS-14017
>                 URL: https://issues.apache.org/jira/browse/HDFS-14017
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Chen Liang
>            Assignee: Chen Liang
>            Priority: Major
>         Attachments: HDFS-14017-HDFS-12943.001.patch, HDFS-14017-HDFS-12943.002.patch,
HDFS-14017-HDFS-12943.003.patch, HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch,
HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends {{ObserverReadProxyProvider}},
and the only difference is changing the proxy factory to use {{IPFailoverProxyProvider}}.
However, this is not enough, because when the constructor of {{ObserverReadProxyProvider}} is
invoked via super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
>         HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses in order to do configuration-based failover.
But in the case of IPFailover, this does not really apply.
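> A minimal sketch (illustrative values only, not part of the attached patches) contrasting the
HA-style configuration that {{getProxyAddresses}} resolves with a typical IPFailover client
configuration, which has no per-NameNode entries left to resolve:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> public class ConfigSketch {
>   // HA-style client config: getProxyAddresses() can resolve nn1/nn2 from these keys.
>   static Configuration haStyle() {
>     Configuration conf = new Configuration();
>     conf.set("dfs.nameservices", "mycluster");
>     conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
>     conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.xyz.com:8020");
>     conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.xyz.com:8020");
>     return conf;
>   }
>
>   // IPFailover-style client config: only the virtual host is known to the client;
>   // failover happens at the IP layer, so there is nothing for getProxyAddresses() to resolve.
>   static Configuration ipFailoverStyle() {
>     Configuration conf = new Configuration();
>     conf.set("fs.defaultFS", "hdfs://nn.xyz.com:8020");
>     return conf;
>   }
> }
> {code}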
>  
> A second, closely related issue concerns delegation tokens. For example, in the current IPFailover
setup, say we have a virtual host nn.xyz.com, which points to either of two physical nodes,
nn1.xyz.com or nn2.xyz.com. In current HDFS there is always only one DT being exchanged, and it
carries the hostname nn.xyz.com. The server only issues this DT, and the client only knows the host
nn.xyz.com, so all is good. But with Observer reads, even with IPFailover, the client will no
longer contact nn.xyz.com; it will actively reach out to nn1.xyz.com and nn2.xyz.com. During this
process, the current code will look for a DT associated with hostname nn1.xyz.com or nn2.xyz.com,
which is different from the DT issued by the NN, causing token authentication to fail. This happens
in {{AbstractDelegationTokenSelector#selectToken}}. The new IPFailover proxy provider will need
to resolve this as well.
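> As an illustration (a deliberately simplified stand-in, not the actual
{{AbstractDelegationTokenSelector}} code), token selection keys off the token's service string,
so a DT issued for the virtual host never matches a lookup keyed by a physical hostname:
> {code:java}
> import java.util.Map;
>
> public class TokenSelectSketch {
>   // Simplified: real tokens are selected by their "service" field; here a plain map
>   // stands in for the client's credentials.
>   static String selectToken(String requestedService, Map<String, String> tokensByService) {
>     return tokensByService.get(requestedService); // exact match on the service string
>   }
>
>   public static void main(String[] args) {
>     Map<String, String> tokens = Map.of("nn.xyz.com:8020", "DT-issued-by-virtual-host");
>     // The client now contacts a physical node, so the lookup misses and authentication fails.
>     System.out.println(selectToken("nn1.xyz.com:8020", tokens)); // prints: null
>   }
> }
> {code}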



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

