hadoop-hdfs-user mailing list archives

From Mohammad Tariq <donta...@gmail.com>
Subject Re: Incompatible clusterIDs
Date Mon, 29 Apr 2013 21:30:43 GMT
Hello Kevin,

          Have you reformatted the NN (unsuccessfully)? Was your NN serving
some other cluster earlier, or were your DNs part of some other cluster?
Datanodes bind themselves to the namenode through the namespaceID, and in
your case the IDs of the DNs and the NN seem to differ. As a workaround you
could do this:

1- Stop all the daemons.
2- Go to the directory which you have specified as the value of
"dfs.name.dir" property in your hdfs-site.xml file.
3- You'll find a directory called "current" inside this directory,
containing a file named "VERSION". Open this file and copy the value of
"namespaceID" from here.
4- Now go to the directory which you have specified as the value of the
"dfs.data.dir" property in your hdfs-site.xml file.
5- Move inside its "current" directory and open the "VERSION" file here as
well. Replace the value of "namespaceID" present here with the one you
copied earlier.
6- Restart all the daemons.
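Steps 3-5 above can be sketched in shell. The directory paths and the
namespaceID values below are made-up stand-ins for demonstration; on a real
cluster you would substitute the dfs.name.dir and dfs.data.dir values from
your own hdfs-site.xml, and run the stop/start commands shown in comments.

```shell
#!/bin/sh
# Demo setup: fake name/data dirs with mismatched namespaceIDs (assumed
# values, not taken from the original log).
NAME_DIR=$(mktemp -d)   # stands in for dfs.name.dir
DATA_DIR=$(mktemp -d)   # stands in for dfs.data.dir
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"
echo "namespaceID=1394542301" > "$NAME_DIR/current/VERSION"
echo "namespaceID=9999999999" > "$DATA_DIR/current/VERSION"

# 1. (on a real cluster: stop all the daemons first)

# 3. read the namespaceID recorded by the namenode
NS_ID=$(grep '^namespaceID=' "$NAME_DIR/current/VERSION" | cut -d= -f2)

# 5. overwrite the datanode's namespaceID with the namenode's value
sed -i "s/^namespaceID=.*/namespaceID=$NS_ID/" "$DATA_DIR/current/VERSION"

cat "$DATA_DIR/current/VERSION"   # the two files now agree

# 6. (on a real cluster: restart all the daemons)
```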

Note : If you have not set dfs.name.dir and dfs.data.dir explicitly, you
will find all of this inside your temp directory (hadoop.tmp.dir).
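When the directories were never set explicitly, a quick way to locate the
VERSION files is to search under hadoop.tmp.dir. The directory layout
below is a fabricated example used only to demonstrate the search; on a
real node you would point find at your actual hadoop.tmp.dir (which
defaults to a per-user directory under /tmp).

```shell
#!/bin/sh
# Demo: build a fake hadoop.tmp.dir layout, then locate its VERSION files.
TMP_DIR=$(mktemp -d)   # stands in for hadoop.tmp.dir
mkdir -p "$TMP_DIR/dfs/name/current" "$TMP_DIR/dfs/data/current"
touch "$TMP_DIR/dfs/name/current/VERSION" "$TMP_DIR/dfs/data/current/VERSION"

# Both the namenode and datanode VERSION files turn up in one search.
find "$TMP_DIR" -name VERSION
```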


Warm Regards,

On Tue, Apr 30, 2013 at 2:45 AM, <rkevinburton@charter.net> wrote:

> I am trying to start up a cluster and in the datanode log on the NameNode
> server I get the error:
> 2013-04-29 15:50:20,988 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Lock on /data/hadoop/dfs/data/in_use.lock acquired by nodename
> 1406@devUbuntu05
> 2013-04-29 15:50:20,990 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-1306349046- (storage id
> DS-403514403- service to devUbuntu05/
> java.io.IOException: *Incompatible clusterIDs* in /data/hadoop/dfs/data:
> namenode clusterID = CID-23b9f9c7-2c25-411f-8bd2-4d5c9d7c25a1; datanode
> clusterID = CID-e3f6b811-c1b4-4778-a31e-14dea8b2cca8
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
>         at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> How do I get around this error? What does the error mean?
> Thank you.
> Kevin
