hadoop-hdfs-user mailing list archives

From "Hiller, Dean (Contractor)" <dean.hil...@broadridge.com>
Subject decommision nodes not working(localhost vs. ips in website too)
Date Tue, 04 Jan 2011 01:03:00 GMT
Luckily I am in dev, so it's not a big deal, but the datanode seems to be reading
from /etc/hosts (i.e., Java calls to InetAddress.getLocalHost return the /etc/hosts name instead of the IP) when displaying the names of the live nodes.
When displaying the names of the dead nodes, however, it displays the
hostnames from my slaves and exclude files.
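To illustrate the lookup being described: on Linux, InetAddress.getLocalHost() typically resolves the machine's hostname through /etc/hosts, so whatever name appears there is what gets reported. A minimal sketch (class name is mine, not from Hadoop):

```java
import java.net.InetAddress;

public class LocalHostCheck {
    public static void main(String[] args) throws Exception {
        // On Linux this resolution usually goes through /etc/hosts, so a
        // line like "127.0.0.1 localhost myhost" can make the JVM report
        // "localhost" even though the machine has a real FQDN and IP.
        InetAddress addr = InetAddress.getLocalHost();
        System.out.println("hostname: " + addr.getHostName());
        System.out.println("address:  " + addr.getHostAddress());
    }
}
```

If the datanode identifies itself this way at registration, the NameNode ends up displaying that /etc/hosts name rather than the name from the slaves file.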


I wonder why the hadoop script doesn't pass the FQDN from the slaves
file to the slave node on startup, so there is no lookup of /etc/hosts
AND the datanodes could then bind to the correct FQDN as well if they wanted to.


Anyway, my dead node shows up in my live nodes list (as localhost, which
it is not, though with the correct IP) and never transitions to the decommissioned
state.  Is there any way to solve this?
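For context, the decommission flow being attempted looks roughly like this (the file path is illustrative, not from the original post). The key constraint is that entries in the exclude file must match the name or IP the NameNode actually registered for the datanode, which is exactly why a datanode registering as "localhost" never matches its exclude-file hostname and never decommissions:

```
<!-- hdfs-site.xml: point the NameNode at an exclude file -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/path/to/excludes</value>
</property>
```

The excludes file lists one host per line, then the NameNode is told to re-read it:

```
# /path/to/excludes -- must match the registered datanode name
datanode1.example.com

$ hadoop dfsadmin -refreshNodes
```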


I read that my /etc/hosts file is supposed to contain localhost and
localhost.localdomain, but to get the hostname to display correctly, I
need something more like <FQDN> <hostname> instead, as then I
know it would display properly there....and I may even have to change
that to <ip> as InetAddress.getLocalHost returns whatever is in
/etc/hosts on any Linux system I have ever been on (Ubuntu, CentOS at least).

Any way to fix this???




This message and any attachments are intended only for the use of the addressee and
may contain information that is privileged and confidential. If the reader of the 
message is not the intended recipient or an authorized representative of the
intended recipient, you are hereby notified that any dissemination of this
communication is strictly prohibited. If you have received this communication in
error, please notify us immediately by e-mail and delete the message and any
attachments from your system.
