hadoop-common-user mailing list archives

From Vishnu Viswanath <vishnu.viswanat...@gmail.com>
Subject DataNode not starting in slave machine
Date Wed, 25 Dec 2013 13:31:50 GMT
Hi,

I am getting the following error while starting the DataNode on my slave machine.

I read JIRA HDFS-2515 <https://issues.apache.org/jira/browse/HDFS-2515>,
which says this happens when Hadoop picks up the wrong conf file.

13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
    at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)

But how do I check which conf file Hadoop is actually using, and how do I set it?
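
If I read the stack trace right, the DataNode is falling back to the default fs.default.name of file:/// instead of my hdfs://master:9000, so it looks like my core-site.xml is not being read. For what it's worth, my understanding is that the conf directory can be checked and forced along these lines (assuming the stock Hadoop 1.x scripts; the path below is only a placeholder for my install):

    bin/hadoop org.apache.hadoop.conf.Configuration        # dumps the resolved configuration, if I remember the trick right
    export HADOOP_CONF_DIR=/home/vishnu/hadoop/conf        # export before calling the start scripts
    bin/hadoop --config /home/vishnu/hadoop/conf datanode  # run the DataNode in the foreground with an explicit conf dir
    bin/hadoop-daemon.sh --config /home/vishnu/hadoop/conf start datanode   # or start it as a daemon

Is that the right way to verify it, or does the daemon log the conf directory it used somewhere at startup?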

These are my configurations:

core-site.xml
------------------
<configuration>
    <property>
        <name>fs.defualt.name</name>
        <value>hdfs://master:9000</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/vishnu/hadoop-tmp</value>
    </property>
</configuration>

hdfs-site.xml
--------------------
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

mapred-site.xml
--------------------
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
</configuration>

Any help would be appreciated.
