hadoop-common-user mailing list archives

From Norbert Burger <norbert.bur...@gmail.com>
Subject Re: Namenode not listening for remote connections to port 9000
Date Fri, 13 Feb 2009 13:50:56 GMT
On Fri, Feb 13, 2009 at 8:37 AM, Steve Loughran <stevel@apache.org> wrote:

> Michael Lynch wrote:
>> Hi,
>> As far as I can tell I've followed the setup instructions for a hadoop
>> cluster to the letter,
>> but I find that the datanodes can't connect to the namenode on port 9000
>> because it is only
>> listening for connections from localhost.
>> In my case, the namenode is called centos1, and the datanode is called
>> centos2. They are
>> centos 5.1 servers with an unmodified sun java 6 runtime.
> fs.default.name takes a URL to the filesystem, such as
> hdfs://centos1:9000/
> If the machine is only binding to localhost, that may mean DNS fun. Try a
> fully qualified name instead.

(fs.default.name is defined in conf/hadoop-site.xml, overriding entries from
conf/hadoop-default.xml.)
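As a sketch, a minimal conf/hadoop-site.xml along those lines might look like the following. The hostname centos1 and port 9000 come from the thread; the .example.com domain is just a placeholder for whatever fully qualified name your DNS actually serves:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- use a fully qualified hostname (placeholder domain shown)
         rather than a bare name that might resolve to localhost -->
    <value>hdfs://centos1.example.com:9000/</value>
  </property>
</configuration>
```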

Also, check your /etc/hosts file on both machines.  It could be that you have
an incorrect setup where both localhost and the namenode hostname (centos1)
are aliased to 127.0.0.1.
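For example, an /etc/hosts like the first entry below on centos1 would produce exactly this symptom, since the namenode resolves its own hostname to the loopback address and binds there. The 192.168.1.10 address is a made-up placeholder; substitute the machine's real LAN address:

```
# Broken: centos1 is an alias for the loopback address,
# so the namenode listens on 127.0.0.1 only.
127.0.0.1    localhost centos1

# Fixed: keep localhost on loopback, and map centos1 to its
# real network address (placeholder IP shown).
127.0.0.1    localhost
192.168.1.10 centos1
```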

