hadoop-common-user mailing list archives

From Vamsi Krishna <vamsi.attl...@gmail.com>
Subject Namenode automatic failover - how to handle WebHDFS URL?
Date Wed, 08 Jun 2016 17:35:34 GMT

How should the WebHDFS URL be handled in case of NameNode automatic failover in HA HDFS?



When working with the HDFS CLI, replacing ‘<HOST>:<RPC_PORT>’ in the HDFS
URI with the ‘dfs.nameservices’ value (from hdfs-site.xml) fetches the same
result as using ‘<HOST>:<RPC_PORT>’ directly.

By using ‘dfs.nameservices’ in the HDFS URI, I do not need to change my
HDFS CLI commands when a NameNode automatic failover occurs.


hdfs dfs -ls hdfs://<HOST>:<RPC_PORT>/<PATH>

hdfs dfs -ls hdfs://<dfs.nameservices>/<PATH>
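For reference, the nameservice-based CLI URI above relies on HA client settings along these lines in hdfs-site.xml (the nameservice name "mycluster", the NameNode IDs "nn1"/"nn2", and the hostnames below are illustrative placeholders, not values from this cluster):

```xml
<!-- Illustrative HA client configuration; all names and hosts are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode-1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode-2:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

The HDFS client resolves ‘hdfs://mycluster/…’ through the failover proxy provider, which is why the CLI keeps working across failovers; the plain WebHDFS REST endpoint has no equivalent resolution step.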


WebHDFS URL: http://<HOST>:<HTTP_PORT>/webhdfs/v1/<PATH>?op=...

Is there a way to frame the WebHDFS URL so that we do not have to change the
host in the URL when a NameNode automatic failover occurs (failover from
namenode-1 to namenode-2)?


I have a web application that uses WebHDFS HTTP requests to read data files
from a Hadoop cluster.
I would like to know whether there is a way for the web application to keep
working without any downtime during a NameNode automatic failover (failover
from namenode-1 to namenode-2).
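One workaround, since WebHDFS itself does not understand the nameservice name, is to have the client try each NameNode's HTTP address in turn and fall through when one is unreachable or standing by. A minimal sketch in Python, assuming two hypothetical NameNode HTTP endpoints (hostnames and ports are placeholders, not values from this cluster):

```python
# Client-side WebHDFS failover sketch. The standby NameNode rejects
# read operations, so we probe the configured hosts in order and use
# the first one that answers successfully.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Hypothetical NameNode HTTP addresses; replace with the real hosts/ports.
NAMENODES = ["namenode-1.example.com:50070",
             "namenode-2.example.com:50070"]

def webhdfs_url(host, path, op):
    """Build a WebHDFS v1 URL for the given host, HDFS path, and operation."""
    return "http://%s/webhdfs/v1%s?op=%s" % (host, path, op)

def webhdfs_request(path, op="OPEN"):
    """Try each NameNode in turn; return the first successful response."""
    last_err = None
    for host in NAMENODES:
        try:
            return urlopen(webhdfs_url(host, path, op))
        except (HTTPError, URLError) as err:  # standby NN or host down
            last_err = err
    raise last_err
```

Alternatives with the same effect are putting an HTTP load balancer or virtual IP in front of the two NameNodes, or reading through an HttpFS gateway, which speaks the same REST API from a single stable host.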

Vamsi Attluri
