hadoop-common-user mailing list archives

From "Amit Chandel" <amitchan...@gmail.com>
Subject Re: configure Secondary NameNode on a remote server
Date Fri, 29 Aug 2008 06:12:31 GMT
Hi Gerardo,

You can specify the address of the secondary name node in the conf/masters
file. It worked for me.
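For example, assuming the secondary NameNode runs on a host named secondary.example.com (a hypothetical hostname used here for illustration), conf/masters would contain just that hostname:

```
secondary.example.com
```

If you also need the secondary NameNode's HTTP server to bind to a specific address rather than 0.0.0.0, one option is to override dfs.secondary.http.address in conf/hadoop-site.xml on that host; a minimal sketch:

```
<property>
  <name>dfs.secondary.http.address</name>
  <value>secondary.example.com:50090</value>
</property>
```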

- Amit

On Fri, Aug 29, 2008 at 6:53 AM, Gerardo Velez <jgerardo.velez@gmail.com>wrote:

> Hi All!
>
> Since the NameNode is a single point of failure for the HDFS cluster, I
> would like to configure the secondary NameNode on a remote server.
>
> In order to do so, I checked out the hadoop-default.xml config file and I
> found the following:
>
> <property>
>  <name>dfs.secondary.http.address</name>
>  <value>0.0.0.0:50090</value>
>  <description>
>    The secondary namenode http server address and port.
>    If the port is 0 then the server will start on a free port.
>  </description>
> </property>
>
> <property>
>  <name>dfs.datanode.address</name>
>  <value>0.0.0.0:50010</value>
>  <description>
>    The address where the datanode server will listen to.
>    If the port is 0 then the server will start on a free port.
>  </description>
> </property>
>
> <property>
>  <name>dfs.datanode.http.address</name>
>  <value>0.0.0.0:50075</value>
>  <description>
>    The datanode http server address and port.
>    If the port is 0 then the server will start on a free port.
>  </description>
> </property>
>
>
> But I did not find any "<name>dfs.secondary.address</name>" property, so do
> you have any idea how I can achieve this?
>
> Thanks in advance!
>
