Actually (from day to day) I don’t get a NEW IP address.
The OS just can’t resolve the hostname when the DataNode starts up.
NameNode, JobTracker & TaskTracker services all start successfully as is.
> Take out the boot up starting of the cluster and start the cluster manually.
Sorry, but that’s a silly suggestion as it’s worse than what I do now.
As soon as I log in, the first thing I do is sudo service hadoop-hdfs-datanode start
and I’m ready to go.
I think all we need to do is postpone the datanode startup for a few seconds.
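One way to sketch that delay (an untested idea, not something from the CDH4 init scripts; the function name is mine) is a small wait loop that polls until the hostname resolves before the datanode is launched, e.g. from rc.local or a wrapper around the service start:

```shell
#!/bin/sh
# wait_for_resolve NAME TIMEOUT: poll once per second until NAME
# resolves via the system resolver (getent) or TIMEOUT seconds pass.
wait_for_resolve() {
    name=$1
    timeout=$2
    i=0
    while [ "$i" -lt "$timeout" ]; do
        if getent hosts "$name" > /dev/null 2>&1; then
            return 0    # hostname resolves; safe to start the datanode
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1            # gave up; hostname still unresolvable
}

# Example use: wait up to 30s for our own hostname, then start HDFS.
if wait_for_resolve "$(hostname)" 30; then
    service hadoop-hdfs-datanode start
fi
```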
So you're running a pseudo-distributed cluster...
Take out the boot up starting of the cluster and start the cluster manually.
Even with DHCP, you shouldn't always get a new IP address, because your lease shouldn't expire that quickly...
Manually start Hadoop...
On Aug 8, 2012, at 2:43 AM, Alan Miller <Alan.Miller@synopsys.com> wrote:
Sure, but like I said, I'm on DHCP, so my IP always changes.
In my config files I tried using "localhost4" and "127.0.0.1", but in
both cases it still uses my fully-qualified hostname instead of 127.0.0.1:
STARTUP_MSG: host = myhostname.mycompany.com/10.11.12.13
STARTUP_MSG: args = 
STARTUP_MSG: version = 2.0.0-cdh4.0.1
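If the goal is to stop the datanode from advertising the FQ hostname, one property that may be worth a try (an assumption on my part; I haven't verified that the 2.0.0-cdh4.0.1 build honors dfs.datanode.hostname) is in hdfs-site.xml:

```xml
<!-- Hypothetical: pin the hostname the datanode reports,
     instead of letting it resolve the machine's FQDN. -->
<property>
  <name>dfs.datanode.hostname</name>
  <value>localhost</value>
</property>
```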
I had a similar problem under different circumstances. I added the hostname and IP to the /etc/hosts file.
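For example, using the host/IP from the startup log above (note this only helps as long as DHCP keeps handing out the same address, which it apparently does here from day to day):

```
10.11.12.13   myhostname.mycompany.com   myhostname
```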
For development I run CDH4 on my local machine but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.
Looks like the datanode process is getting started before my DHCP-assigned hostname is resolvable.
2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = java.net.UnknownHostException: myhostname: myhostname
2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname
I’m on Fedora 16/x86_64.
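A quick way to confirm that diagnosis right after a reboot (assuming glibc's getent is available, which it is on Fedora) is to query the resolver the same way the JVM effectively does:

```shell
# Prints the resolved address on success; exits non-zero while the
# hostname is still unresolvable, which is exactly the window where
# the datanode dies with UnknownHostException.
getent hosts "$(hostname)"
```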