From: Michel Segel <michael_segel@hotmail.com>
Subject: Re: datanode startup before hostname is resolvable
Date: Wed, 8 Aug 2012 05:47:46 -0500
To: user@hadoop.apache.org

So you're running a pseudo cluster...

Take the cluster startup out of the boot sequence and start the cluster manually. Even with DHCP you shouldn't always get a new IP address, because your lease shouldn't expire that quickly...

Manually start Hadoop...

Sent from a remote device. Please excuse any typos...

Mike Segel
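
A minimal sketch of what "start the cluster manually" could look like on a CDH4 install like this one (the hadoop-hdfs-datanode service name is taken from Alan's message below; the namenode service name and the chkconfig calls are assumptions and may differ per setup):

    # Keep the HDFS daemons out of the boot sequence (SysV-style services):
    sudo chkconfig hadoop-hdfs-namenode off
    sudo chkconfig hadoop-hdfs-datanode off

    # Start them by hand once the network is up and the hostname resolves:
    sudo service hadoop-hdfs-namenode start
    sudo service hadoop-hdfs-datanode start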


On Aug 8, 2012, at 2:43 AM, Alan Miller <Alan.Miller@synopsys.com> wrote:

Sure but like I said, I'm on DHCP so my IP always changes.

In my config files I tried using "localhost4" and "127.0.0.1" but in
both cases it still uses my FQ hostname instead of 127.0.0.1
E.g.:

  STARTUP_MSG:   host = myhostname.mycompany.com/10.11.12.13
  STARTUP_MSG:   args = []
  STARTUP_MSG:   version = 2.0.0-cdh4.0.1

From: /etc/hadoop/conf/core-site.xml

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost4:8020</value>
  </property>

From: /etc/hadoop/conf/mapred-site.xml

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost4:8021</value>
  </property>

Alan
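
Since the STARTUP_MSG above still shows the FQ hostname despite the localhost4 config, it can help to check what the machine itself resolves; a hedged sketch (commands only, output will differ per box):

    hostname -f                  # the fully qualified name the daemons tend to pick up
    getent hosts localhost4      # should map to 127.0.0.1 via /etc/hosts on Fedora
    getent hosts "$(hostname)"   # fails while the DHCP-assigned name is not yet resolvable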

From: Chandra Mohan, Ananda Vel Murugan [mailto:Ananda.Murugan@honeywell.com]
Sent: Wednesday, August 08, 2012 9:19 AM
To: user@hadoop.apache.org
Subject: RE: datanode startup before hostname is resolvable

I had a similar problem under different circumstances. I added the hostname and IP to the /etc/hosts file.
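
For a single-machine pseudo cluster on DHCP, a hedged variant of that fix is to map the hostname to the loopback address so it always resolves no matter what lease is handed out (the hostname below is just the placeholder from Alan's startup log):

    From: /etc/hosts (sketch)
      127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
      127.0.0.1   myhostname.mycompany.com myhostname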

 

 

From: Alan Miller [mailto:Alan.Miller@synopsys.com]
Sent: Wednesday, August 08, 2012 12:32 PM
To: user@hadoop.apache.org
Subject: datanode startup before hostname is resolvable

For development I run CDH4 on my local machine, but I notice that I have to
manually start the datanode (sudo service hadoop-hdfs-datanode start)
after each reboot.

Looks like the datanode process is getting started before my DHCP address is resolvable.
From: /var/log/hadoop-hdfs/hadoop-hdfs-datanode-myhost.log

    2012-08-08 08:44:01,171 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    ...
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG:   host = java.net.UnknownHostException: myhostname: myhostname
    ...
    2012-08-08 08:44:02,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.net.UnknownHostException: myhostname: myhostname
    SHUTDOWN_MSG: Shutting down DataNode at java.net.UnknownHostException: myhostname: myhostname

I'm on Fedora 16/x86_64.
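
If the boot-time start is kept, one hedged workaround for this race is a small wrapper (run as root, e.g. from rc.local or the init script) that waits until the local hostname resolves before starting the datanode; a sketch only, the 60-second timeout is arbitrary:

    #!/bin/bash
    # Block (up to ~60s) until the local hostname is resolvable, then start the datanode.
    for i in $(seq 1 60); do
        getent hosts "$(hostname)" >/dev/null 2>&1 && break
        sleep 1
    done
    exec service hadoop-hdfs-datanode start   # same service Alan starts by hand with sudo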

 


Regards,
Alan