Date: Wed, 25 Dec 2013 19:35:02 +0530
Subject: Re: DataNode not starting in slave machine
From: kishore alajangi <alajangikishore@gmail.com>
To: user@hadoop.apache.org

Replace hdfs:// with file:/// in the fs.default.name property.

On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <vishnu.viswanath25@gmail.com> wrote:

> Hi,
>
> I am getting this error while starting the DataNode on my slave machine.
>
> I read JIRA HDFS-2515; it says this happens because Hadoop is using the
> wrong conf file.
>
> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
> started
> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
> exists!
> 13/12/24 15:57:15 ERROR datanode.DataNode:
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: file:///
>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> But how do I check which conf file Hadoop is using? And how do I set it?
>
> These are my configurations:
>
> core-site.xml
> ------------------
> <configuration>
>     <property>
>         <name>fs.defualt.name</name>
>         <value>hdfs://master:9000</value>
>     </property>
>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/vishnu/hadoop-tmp</value>
>     </property>
> </configuration>
>
> hdfs-site.xml
> --------------------
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>2</value>
>     </property>
> </configuration>
>
> mapred-site.xml
> --------------------
> <configuration>
>     <property>
>         <name>mapred.job.tracker</name>
>         <value>master:9001</value>
>     </property>
> </configuration>
>
> Any help would be appreciated.

--
Thanks,
Kishore.
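
The stack trace above points at the likely root cause: in the quoted core-site.xml the property name is misspelled (fs.defualt.name instead of fs.default.name), so Hadoop ignores the hdfs://master:9000 value and falls back to the core-default.xml default of file:///, which NetUtils.createSocketAddr then rejects for lacking a host:port authority. Below is a minimal sketch of the fix, assuming a Hadoop 1.x install with its conf directory at $HADOOP_HOME/conf (paths are illustrative); each node reads its own local conf directory, so this has to be applied on the slave where the DataNode runs.

# Rewrite core-site.xml with the property name spelled correctly.
# With the key misspelled, Hadoop silently ignores it and uses the
# built-in default fs.default.name of file:///.
cat > "$HADOOP_HOME/conf/core-site.xml" <<'EOF'
<configuration>
    <property>
        <!-- fs.default.name, not fs.defualt.name -->
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/vishnu/hadoop-tmp</value>
    </property>
</configuration>
EOF

# Restart the DataNode so it rereads the file.
"$HADOOP_HOME/bin/hadoop-daemon.sh" stop datanode
"$HADOOP_HOME/bin/hadoop-daemon.sh" start datanode

One caveat on the reply at the top of the thread: file:/// has no host:port component, which is exactly what the IllegalArgumentException complains about, so the hdfs://host:port form the poster already intended is the shape a DataNode needs in fs.default.name.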
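
On the other question in the thread (which conf directory Hadoop is using, and how to set it): in the Hadoop 1.x launcher scripts the conf directory is resolved as an explicit --config argument first, then the HADOOP_CONF_DIR environment variable, then $HADOOP_HOME/conf by default. A short sketch of checking and overriding it, again assuming the 1.x scripts and with illustrative paths:

# Empty output here means the default $HADOOP_HOME/conf is in effect.
echo "$HADOOP_CONF_DIR"

# The conf dir in use is placed first on the classpath by hadoop-config.sh,
# so this should print it:
hadoop classpath | tr ':' '\n' | head -n 1

# Pin the daemons to a specific conf directory:
export HADOOP_CONF_DIR=/home/vishnu/hadoop/conf     # illustrative path
# ...or per invocation:
"$HADOOP_HOME/bin/hadoop-daemon.sh" --config /home/vishnu/hadoop/conf start datanode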