hadoop-mapreduce-user mailing list archives

From: Vishnu Viswanath <vishnu.viswanat...@gmail.com>
Subject: Re: DataNode not starting in slave machine
Date: Wed, 25 Dec 2013 15:56:03 GMT
Thanks, everyone.

I downloaded hadoop-1.2.1 again, set up all the conf-* files, and now it
works fine.
I don't know why it didn't work the first time; the properties I set now
were exactly the same as the ones I set last time.
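
One likely culprit, for anyone who finds this thread later: in the
core-site.xml quoted below, the property name is spelled
"fs.defualt.name". In hadoop-1.2.1 the property is fs.default.name, and
an unrecognized key is silently ignored, so HDFS falls back to the
default file:///, which would explain the "Does not contain a valid
host:port authority: file:///" error. A corrected core-site.xml, keeping
the master:9000 address from my setup, would be:

<configuration>
    <!-- must be fs.default.name; a misspelled key is ignored and the
         filesystem defaults to file:/// -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/vishnu/hadoop-tmp</value>
    </property>
</configuration>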

Regards
Vishnu

On Wed, Dec 25, 2013 at 8:20 PM, Shekhar Sharma <shekhar2581@gmail.com> wrote:

> It is running on the local file system (file:///).
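>
> The DataNode reads fs.default.name from core-site.xml; if the property
> is missing (or the key is misspelled, as in the core-site.xml below,
> where it appears as "fs.defualt.name"), it falls back to the default
> file:///, which has no host:port part, and that is exactly the
> IllegalArgumentException raised by NetUtils.createSocketAddr.
>
> To see which conf directory a daemon picks up, check HADOOP_CONF_DIR
> (the 1.x scripts fall back to $HADOOP_HOME/conf when it is unset), and
> you can point a daemon at an explicit directory with --config. A quick
> sketch, assuming a standard hadoop-1.2.1 tarball layout:
>
>     # which conf dir is in effect (empty means $HADOOP_HOME/conf is used)
>     echo $HADOOP_CONF_DIR
>
>     # start the datanode against an explicit conf directory
>     $HADOOP_HOME/bin/hadoop-daemon.sh --config $HADOOP_HOME/conf start datanode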
>
> Regards,
> Som Shekhar Sharma
> +91-8197243810
>
>
> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath
> <vishnu.viswanath25@gmail.com> wrote:
> > Hi,
> >
> > I am getting this error while starting the datanode on my slave machine.
> >
> > I read JIRA HDFS-2515; it says this happens because Hadoop is using the
> > wrong conf file.
> >
> > 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> > 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> > 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> > 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
> > 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
> > 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
> > 13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
> >     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
> >     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
> >
> > But how do I check which conf file Hadoop is using? And how do I set it?
> >
> > These are my configurations:
> >
> > core-site.xml
> > ------------------
> > <configuration>
> >     <property>
> >         <name>fs.defualt.name</name>
> >         <value>hdfs://master:9000</value>
> >     </property>
> >
> >     <property>
> >         <name>hadoop.tmp.dir</name>
> >         <value>/home/vishnu/hadoop-tmp</value>
> >     </property>
> > </configuration>
> >
> > hdfs-site.xml
> > --------------------
> > <configuration>
> >     <property>
> >         <name>dfs.replication</name>
> >         <value>2</value>
> >     </property>
> > </configuration>
> >
> > mapred-site.xml
> > --------------------
> > <configuration>
> >     <property>
> >         <name>mapred.job.tracker</name>
> >         <value>master:9001</value>
> >     </property>
> > </configuration>
> >
> > Any help is appreciated.
> >
>
