hadoop-hdfs-user mailing list archives

From felix gao <gre1...@gmail.com>
Subject Re: Configure NameNode to accept connection from external ips
Date Mon, 31 Jan 2011 21:54:41 GMT
I am trying to create a client that talks to HDFS, and I am running into the
following problem:

ipc.Client: Retrying connect to server:
hm01.xxx.xxx.com/xx.xxx.xxx.176:50001. Already tried 0 time(s).

hm01 is running the NameNode and TaskTracker, and it is reachable when connecting
to it from the internal IP range 192.168.100.1 to 192.168.100.255. However, my client
sits on a completely different network.  What do I need to configure so that
the NameNode serves clients that initiate requests from a different
network?
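
My guess is that the NameNode RPC server is only bound to the internal interface
on hm01. I have seen a bind-host style setting mentioned for newer Hadoop
releases, though I am not sure it exists in the version I am running; roughly,
in hdfs-site.xml on hm01 it would look something like this (the property name
and the 0.0.0.0 value are my assumption, not something I have verified here):

<property>
    <!-- make the NameNode RPC server listen on all interfaces, not just the internal NIC -->
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
</property>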

Here is how core-site.xml is configured for the NameNode on my client:
<property>
    <name>fs.default.name</name>
    <value>hdfs://hm01.xxx.xxx.com:50001</value>
</property>
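
For comparison, the core-site.xml on the cluster nodes themselves presumably
points at the internal address, something like the following (hm01-internal is
just a placeholder for whatever the internal hostname or 192.168.100.x address
actually is):

<property>
    <!-- cluster-side NameNode address, reachable only on the internal network -->
    <name>fs.default.name</name>
    <value>hdfs://hm01-internal:50001</value>
</property>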

Thanks,

Felix



On Tue, Jan 25, 2011 at 2:24 PM, felix gao <gre1600@gmail.com> wrote:

> Hi guys,
>
> I have a small cluster in which each machine has two NICs: one is configured
> with an external IP and the other with an internal IP.  Right now all
> the machines are communicating with each other via the internal IP.  I want
> to configure the NameNode to also accept connections via its external IP
> (from whitelisted IPs), but I am not sure how to do that.  I have a copy of the
> slaves' conf files on my local computer, which sits outside of the cluster
> network, and when I do hadoop fs -ls /user it does not connect to HDFS.
>
> Thanks,
>
> Felix
>
