hadoop-hdfs-user mailing list archives

From Jan Lukavský <jan.lukav...@firma.seznam.cz>
Subject HftpFileSystem is not working with HighAvailability configuration
Date Mon, 04 Aug 2014 11:31:21 GMT
Hi all,

I think there is an issue in how HftpFileSystem (hftp://) interacts 
with HDFS High Availability. A read can fail in the following scenario 
(a possible manual workaround is sketched after the example):

  * a cluster is configured in HA mode, with the following configuration:
    <property>
      <name>dfs.nameservices</name>
      <value>master</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.master</name>
      <value>master1,master2</value>
    </property>
    ...

  * 'master1' is set to standby and 'master2' to active
  * the following command fails with an error:
   $ hadoop fs -ls hftp://master/
   ls: Operation category READ is not supported in state standby
  * while the following succeeds:
   $ hadoop fs -ls hdfs://master/
   Found 98 items
   ...
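
As a temporary workaround it seems possible to bypass the logical 
nameservice and point hftp directly at the HTTP address of the active 
NameNode. Only a sketch, not verified end to end; the hostname below is 
a placeholder for our setup and 50070 is the default NameNode HTTP port:

   $ hdfs haadmin -getServiceState master1
   standby
   $ hdfs haadmin -getServiceState master2
   active
   $ hadoop fs -ls hftp://master2.example.com:50070/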


I have not checked the code, but I suspect that HftpFileSystem either 
always picks the first configured NameNode or does not handle the 
resulting exception correctly.
Is this a known issue? Should the error be handled in some kind of 
wrapper in client code, or is there some other workaround? Or should 
this be fixed in HftpFileSystem itself?
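
By a wrapper in client code I mean something along these lines. This is 
only a sketch; it assumes the standby's rejection makes 'hadoop fs -ls' 
exit with a non-zero code, which I have not verified:

   # try the hftp endpoint of each configured NameNode of the 'master'
   # nameservice in turn, stopping at the first one that answers
   for nn in $(hdfs getconf -confKey dfs.ha.namenodes.master | tr ',' ' '); do
     addr=$(hdfs getconf -confKey dfs.namenode.http-address.master.$nn)
     hadoop fs -ls hftp://$addr/ && break
   done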

Thanks for any opinions,
  Jan

