hadoop-common-user mailing list archives

From "chaitanya krishna" <chaitanyavv.ii...@gmail.com>
Subject Re: what are the issues to be taken care of when the ip(s) of nodes are changed
Date Mon, 26 May 2008 06:56:05 GMT
Thank you, Raghu, for your reply.

   The problem seems to be the same as the one you mentioned:
bin/start-dfs.sh is not starting the datanodes on the specified nodes, even
though ssh to those nodes works fine. I did format the NameNode, and that
might have caused the problem.
   Is there any way to solve this issue?

Chaitanya.
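
The workaround most often suggested for this particular error is to compare
the namespaceID the NameNode expects with the one stored in the datanode's
dfs.data.dir. This is only a sketch, assuming dfs.data.dir is
/home/nutch/expr/hdfs/data as in the original mail; NAME_DIR below is a
placeholder for the NameNode's dfs.name.dir:

  # On the NameNode host: the ID the NameNode expects
  NAME_DIR=/path/to/dfs.name.dir
  grep namespaceID "$NAME_DIR/current/VERSION"

  # On the failing datanode: the ID it still has from before the reformat
  grep namespaceID /home/nutch/expr/hdfs/data/current/VERSION

If the NameNode still has its old metadata, editing namespaceID in the
datanode's VERSION file so the two values match should let the datanode
register again without touching its blocks. If the NameNode really was
reformatted, its old filesystem metadata is gone and the existing blocks
cannot be reattached anyway; in that case, clearing the contents of
dfs.data.dir on each datanode and restarting is the usual (data-losing) way
out.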


On Tue, May 20, 2008 at 11:33 PM, Raghu Angadi <rangadi@yahoo-inc.com>
wrote:

>
> This is most likely not related to the IP address change. The DataNodes and
> the NameNode have different namespace IDs. This can happen if you formatted
> the NameNode, for example, or the datanodes could be contacting the wrong
> NameNode...
>
> start-dfs.sh might fail because of ssh. Make sure you can ssh to each
> datanode from the node where you invoke start-dfs.sh.
>
> Raghu.
>
>
> chaitanya krishna wrote:
>
>> Hi,
>>
>>  I had a cluster of nodes with a specific set of IPs assigned to them, and
>> everything was working fine. But after the IPs were changed, no datanodes
>> are started, although the tasktrackers start up fine.
>>
>> When I tried to manually start a datanode on a specific node using
>> "bin/hadoop datanode", the following error occurred:
>>
>> 08/05/20 18:30:34 INFO dfs.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = 172.16.45.162/172.16.45.162
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.16.1-dev
>> STARTUP_MSG:   build =  -r ; compiled by 'nutch' on Thu Apr 24 16:05:04 IST 2008
>> ************************************************************/
>> 08/05/20 18:30:35 ERROR dfs.DataNode: java.io.IOException: Incompatible
>> namespaceIDs in /home/nutch/expr/hdfs/data: namenode namespaceID =
>> 1397610046; datanode namespaceID = 365757
>>        at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:298)
>>        at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:142)
>>        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:236)
>>        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:162)
>>        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2510)
>>        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:2454)
>>        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2475)
>>        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:2671)
>>
>> 08/05/20 18:30:35 INFO dfs.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at 172.16.45.162/172.16.45.162
>> ************************************************************/
>>
>> (/home/nutch/expr/hdfs/data is the directory specified for dfs.data.dir in
>> conf/hadoop-site.xml)
>>
>> Is there some way to overcome this error without losing the data?
>>
>>
>> thank you.
>>
>>
>
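
As a quick check of the ssh suggestion above: assuming conf/slaves on the
master lists the datanode hosts one per line, something like this, run from
the node where start-dfs.sh is invoked, shows which hosts are reachable
without a password prompt (a sketch, not an official test):

  # Try a non-interactive ssh to every host listed in conf/slaves
  for host in $(cat conf/slaves); do
    ssh -o BatchMode=yes "$host" hostname || echo "ssh to $host failed"
  done

Since the IPs changed, it is also worth checking that conf/slaves and the
fs.default.name entry in conf/hadoop-site.xml point at the new addresses,
as both are read when the cluster is started.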
