So you're running a pseudo cluster...

Remove the cluster from boot-time startup and start the cluster manually.
Even with DHCP, you shouldn't get a new IP address on every boot, because your lease shouldn't expire that quickly...

Manually start Hadoop...
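A rough sketch of what that looks like with the CDH4 packages (the exact service names are an assumption; check /etc/init.d on your box):

```shell
# Disable boot-time startup of the HDFS daemons
# (service names assumed from CDH4 packaging -- verify against /etc/init.d)
sudo chkconfig hadoop-hdfs-namenode off
sudo chkconfig hadoop-hdfs-datanode off

# Later, once the network is up and the hostname resolves,
# start them by hand:
sudo service hadoop-hdfs-namenode start
sudo service hadoop-hdfs-datanode start
```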

Sent from a remote device. Please excuse any typos...

Mike Segel

On Aug 8, 2012, at 2:43 AM, Alan Miller <> wrote:

Sure, but like I said, I’m on DHCP so my IP always changes.


In my config files I tried using “localhost4” and “” but in both cases it still uses my FQ hostname instead of


  STARTUP_MSG:   host =

  STARTUP_MSG:   args = []

  STARTUP_MSG:   version = 2.0.0-cdh4.0.1


From: /etc/hadoop/conf/core-site.xml






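(The XML itself didn't survive here; a pseudo-distributed CDH4 core-site.xml typically sets fs.defaultFS roughly like the following. The value shown is an illustration, not the actual config from this message:)

```xml
<!-- Illustrative only: a typical CDH4 pseudo-distributed setting,
     NOT the config from this message. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
```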
From: /etc/hadoop/conf/mapred-site.xml







From: Chandra Mohan, Ananda Vel Murugan []
Sent: Wednesday, August 08, 2012 9:19 AM
Subject: RE: datanode startup before hostname is resolvable


I had a similar problem under different circumstances. I added the hostname and ip in /etc/hosts file
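For example, a static mapping in /etc/hosts (the address and names below are placeholders -- substitute your own):

```
192.168.1.50   myhost.example.com   myhost
```

With a static entry like this, the hostname resolves even before DHCP has finished, since the resolver consults /etc/hosts first.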


From: Alan Miller []
Sent: Wednesday, August 08, 2012 12:32 PM
Subject: datanode startup before hostname is resolvable


For development I run CDH4 on my local machine, but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.


Looks like the datanode process is getting started before my DHCP address is resolvable.

From:  /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log


    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:


    STARTUP_MSG: Starting DataNode

    STARTUP_MSG:   host = myhostname: myhostname


    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain myhostname: myhostname

    SHUTDOWN_MSG: Shutting down DataNode at myhostname: myhostname



I’m on Fedora 16/x86_64.
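One way to confirm (and work around) the race is to wait until the machine's FQDN resolves before starting the datanode. A minimal sketch, assuming the CDH4 service name and using getent for the lookup; the retry count and sleep interval are arbitrary:

```shell
# Hypothetical workaround sketch: poll until a hostname resolves,
# then start the datanode.
wait_for_hostname() {
    host=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # getent consults /etc/hosts and DNS, like the resolver does
        if getent hosts "$host" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Usage, e.g. from a boot script run after the network comes up:
#   wait_for_hostname "$(hostname -f)" && sudo service hadoop-hdfs-datanode start
```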