hadoop-common-user mailing list archives

From Azuryy <azury...@gmail.com>
Subject Re: DataNode not starting in slave machine
Date Wed, 25 Dec 2013 14:20:12 GMT
Did you add master to the hosts file?
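
For example, with entries like these in /etc/hosts on both machines (the IP addresses below are illustrative placeholders, not values from this thread):

192.168.1.10    master
192.168.1.11    slave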

Sent from my iPhone5s

> On 25-Dec-2013, at 22:11, Vishnu Viswanath <vishnu.viswanath25@gmail.com> wrote:
> 
> Made that change. Still the same error.
> 
> And why should fs.default.name be set to file:///? I am not running in pseudo-distributed mode. I have two systems: one is the master and the other is the slave.
> 
> Vishnu Viswanath
> 
>> On 25-Dec-2013, at 19:35, kishore alajangi <alajangikishore@gmail.com> wrote:
>> 
>> Replace hdfs:// with file:/// in the fs.default.name property.
>> 
>> 
>>> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <vishnu.viswanath25@gmail.com> wrote:
>>> Hi,
>>> 
>>> I am getting this error while starting the datanode in my slave system.
>>> 
>>> I read JIRA HDFS-2515; it says this happens because Hadoop is using the wrong conf file.
>>> 
>>> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system started
>>> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi registered.
>>> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already exists!
>>> 13/12/24 15:57:15 ERROR datanode.DataNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
>>>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>>>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>>> 
>>> But how do I check which conf file Hadoop is using? Or how do I set it?
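>>> 
>>> One way to check, sketched under the assumption of a Hadoop 1.x tarball layout: the start-up scripts read the directory named by $HADOOP_CONF_DIR, falling back to $HADOOP_HOME/conf, and also accept an explicit --config flag. For example:
>>> 
>>> # print the conf dir the scripts will use (empty means $HADOOP_HOME/conf)
>>> echo $HADOOP_CONF_DIR
>>> # start the datanode against an explicit conf directory
>>> hadoop-daemon.sh --config /path/to/conf start datanode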
>>> 
>>> These are my configurations:
>>> 
>>> core-site.xml
>>> ------------------
>>> <configuration>
>>>     <property>
>>>         <name>fs.defualt.name</name>
>>>         <value>hdfs://master:9000</value>
>>>     </property>
>>> 
>>>     <property>
>>>         <name>hadoop.tmp.dir</name>
>>>         <value>/home/vishnu/hadoop-tmp</value>
>>>     </property>
>>> </configuration>
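>>> 
>>> Note that the property name above, fs.defualt.name, is misspelled: a key Hadoop does not recognize is silently ignored, so fs.default.name falls back to its built-in default of file:///, which is precisely the "Does not contain a valid host:port authority: file:///" in the error above. The corrected entry would be:
>>> 
>>>     <property>
>>>         <name>fs.default.name</name>
>>>         <value>hdfs://master:9000</value>
>>>     </property>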
>>> 
>>> hdfs-site.xml
>>> --------------------
>>> <configuration>
>>>     <property>
>>>         <name>dfs.replication</name>
>>>         <value>2</value>
>>>     </property>
>>> </configuration>
>>> 
>>> mapred-site.xml
>>> --------------------
>>> <configuration>
>>>     <property>
>>>         <name>mapred.job.tracker</name>
>>>         <value>master:9001</value>
>>>     </property>
>>> </configuration>
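>>> 
>>> Since the error suggests the datanode is reading a different conf file, one quick sanity check (a sketch, assuming passwordless ssh and the same install path on both machines; adjust the path for your setup):
>>> 
>>> # diff the slave's core-site.xml against the master's copy
>>> ssh slave "cat $HADOOP_HOME/conf/core-site.xml" | diff - $HADOOP_HOME/conf/core-site.xml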
>>> 
>>> Any help is appreciated.
>> 
>> 
>> 
>> -- 
>> Thanks,
>> Kishore.
