hadoop-hdfs-issues mailing list archives

From "Rushabh S Shah (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12285) Better handling of namenode ip address change
Date Thu, 10 Aug 2017 13:27:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16121607#comment-16121607 ]

Rushabh S Shah commented on HDFS-12285:
---------------------------------------

Is this somehow related to HADOOP-12125 or HDFS-8068?

> Better handling of namenode ip address change
> ---------------------------------------------
>
>                 Key: HDFS-12285
>                 URL: https://issues.apache.org/jira/browse/HDFS-12285
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Ming Ma
>
> RPC client layer provides functionality to detect ip address change:
> {noformat}
> Client.java
>     private synchronized boolean updateAddress() throws IOException {
>       // Do a fresh lookup with the old host name.
>       InetSocketAddress currentAddr = NetUtils.createSocketAddrForHost(
>                                server.getHostName(), server.getPort());
>     ......
>     }
> {noformat}
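> As a minimal sketch of the underlying mechanism (illustrative only, not the actual {{Client.java}} code): constructing a new {{InetSocketAddress}} for the same host name performs a fresh DNS lookup, so comparing it with the cached address reveals an ip change. The class and method names below are hypothetical.
> {noformat}
> // Sketch only: detect an ip change by re-resolving the connection's cached host name.
> import java.net.InetSocketAddress;
>
> public class AddressChangeCheck {
>   // "cachedAddr" stands in for the address stored on the rpc connection.
>   static boolean addressChanged(InetSocketAddress cachedAddr) {
>     // A new InetSocketAddress does a fresh DNS lookup for the same host name.
>     InetSocketAddress freshAddr =
>         new InetSocketAddress(cachedAddr.getHostName(), cachedAddr.getPort());
>     return freshAddr.getAddress() != null
>         && !freshAddr.getAddress().equals(cachedAddr.getAddress());
>   }
> }
> {noformat}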
> To use this feature, we need to enable retry via {{dfs.client.retry.policy.enabled}}.
> Otherwise the {{TryOnceThenFail}} RetryPolicy is used, which causes {{handleConnectionFailure}}
> to throw a {{ConnectException}} without ever retrying against the new ip address.
> {noformat}
>     private void handleConnectionFailure(int curRetries, IOException ioe
>         ) throws IOException {
>       closeConnection();
>       final RetryAction action;
>       try {
>         action = connectionRetryPolicy.shouldRetry(ioe, curRetries, 0, true);
>       } catch(Exception e) {
>         throw e instanceof IOException? (IOException)e: new IOException(e);
>       }
>   ......
>   }
> {noformat}
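> For reference, a minimal sketch of enabling this from client code; the two configuration keys are the real ones, while the filesystem URI and surrounding setup are placeholders:
> {noformat}
> // Sketch: turn on the client retry policy so connection failures are retried
> // (and the re-resolved ip gets a chance) instead of failing on the first attempt.
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
>
> public class RetryPolicyExample {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     conf.setBoolean("dfs.client.retry.policy.enabled", true);
>     // Optional: "<pause-in-ms>,<retries>" pairs; this is the default spec.
>     conf.set("dfs.client.retry.policy.spec", "10000,6,60000,10");
>     // "hdfs://nn-host:8020" is a placeholder namenode URI.
>     FileSystem fs = FileSystem.get(new URI("hdfs://nn-host:8020"), conf);
>     System.out.println(fs.getUri());
>   }
> }
> {noformat}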
> However, even with that configuration the behavior isn't ideal: DFSClient still holds onto
> the cached old ip address captured when the proxy was created via {{namenode = proxyInfo.getProxy();}}.
> Thus each new rpc connection first tries the old ip and only then retries with the new ip.
> It would be nice if DFSClient could refresh the namenode proxy automatically when the ip
> address changes.
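> Roughly, the desired behavior could look like the following sketch; {{NamenodeProxyHolder}} and its methods are hypothetical names, not existing DFSClient code:
> {noformat}
> // Hypothetical sketch: if the resolved address of the namenode host no longer
> // matches the cached one, rebuild the proxy so new rpc connections start with
> // the fresh ip instead of the stale one.
> import java.net.InetSocketAddress;
> import java.util.function.Function;
>
> class NamenodeProxyHolder<T> {
>   private final String host;
>   private final int port;
>   private InetSocketAddress cachedAddr;
>   private T proxy;
>
>   NamenodeProxyHolder(String host, int port, T initialProxy) {
>     this.host = host;
>     this.port = port;
>     this.cachedAddr = new InetSocketAddress(host, port);
>     this.proxy = initialProxy;
>   }
>
>   // createProxy stands in for whatever factory produced the original proxy,
>   // i.e. the code path behind proxyInfo.getProxy().
>   synchronized T getProxy(Function<InetSocketAddress, T> createProxy) {
>     InetSocketAddress fresh = new InetSocketAddress(host, port); // fresh lookup
>     if (fresh.getAddress() != null
>         && !fresh.getAddress().equals(cachedAddr.getAddress())) {
>       cachedAddr = fresh;
>       proxy = createProxy.apply(fresh); // drop the proxy bound to the stale ip
>     }
>     return proxy;
>   }
> }
> {noformat}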



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


