hadoop-common-user mailing list archives

From "Samuel Guo" <guosi...@gmail.com>
Subject Re: DataNode Problem
Date Mon, 20 Oct 2008 01:07:20 GMT
It seems the NameNode and the DataNode are running different versions of Hadoop.
Try starting DFS with the -upgrade option (bin/start-dfs.sh -upgrade) so that the
on-disk storage is brought up to the version of Hadoop you are running now.
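
A rough sketch of that sequence, assuming the stock 0.18 scripts and the install
path shown in your logs (adjust to your own setup):

    cd /home/hadoop/tools/hadoop-0.18.0
    bin/stop-all.sh                              # stop any half-started daemons first
    bin/start-dfs.sh -upgrade                    # restart HDFS, asking it to upgrade the storage layout
    bin/hadoop dfsadmin -upgradeProgress status  # check whether the upgrade has completed
    bin/hadoop dfsadmin -finalizeUpgrade         # finalize only once the cluster looks healthy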

On Wed, Oct 15, 2008 at 7:12 PM, ZhiHong Fu <ddream84@gmail.com> wrote:

> This is the content of the node4 log file:
>
> 2008-10-15 16:18:59,406 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at node4.cluster1domain/192.168.1.5
> ************************************************************/
> 2008-10-15 16:30:08,522 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = node4.cluster1domain/192.168.1.5
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.18.0
> STARTUP_MSG:   build =
> http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r
> 686010;
> compiled by 'hadoopqa' on Thu Aug 14 19:48:33 UTC 2008
> ************************************************************/
> 2008-10-15 16:30:10,400 ERROR org.apache.hadoop.dfs.DataNode:
> java.io.IOException: Incompatible namespaceIDs in
> /home/hadoop/tools/hadoop-0.18.0/data: namenode namespaceID = 1737994036;
> datanode namespaceID = 1007039793
>        at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:226)
>        at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:141)
>        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:273)
>        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
>        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
>        at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
>        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
>        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
>
> 2008/10/15 Samuel Guo <guosijie@gmail.com>
>
> > please check the logs of the nodes that didn't come up.
> >
> > On Wed, Oct 15, 2008 at 6:46 PM, ZhiHong Fu <ddream84@gmail.com> wrote:
> >
> > > Yes, thanks. I have tried what you suggested and left safe mode, but
> > > when I run bin/hadoop dfsadmin -report, there is still no datanode
> > > available.
> > >
> > > 2008/10/15 Prasad Pingali <pvvpr@research.iiit.ac.in>
> > >
> > > > Hello,
> > > >   The report shows your DFS has not started yet. Sometimes it may take
> > > > a minute or two to start DFS on a small cluster. Did you wait for some
> > > > time for DFS to start and leave safe mode?
> > > >
> > > > - Prasad.
> > > >
> > > > On Wednesday 15 October 2008 01:57:44 pm ZhiHong Fu wrote:
> > > > > Hello:
> > > > >
> > > > >      I have installed Hadoop on a cluster that has 7 nodes: one is the
> > > > > namenode and the other 6 nodes are datanodes. At that time it ran
> > > > > normally, and I also ran the wordcount example successfully.
> > > > >
> > > > >     But today I wanted to run a mapred application and it reported an
> > > > > error. I found some datanodes were down, but I didn't modify anything,
> > > > > which is weird; I can ssh to all the datanodes.
> > > > >
> > > > >     The error is as follows:
> > > > > [hadoop@cluster1 hadoop-0.18.0]$ bin/start-dfs.sh
> > > > > starting namenode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-namenode-cluster1.cluster1domain.out
> > > > > node3: starting datanode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-datanode-node3.cluster1domain.out
> > > > > node1: starting datanode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-datanode-node1.cluster1domain.out
> > > > > node2: starting datanode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-datanode-node2.cluster1domain.out
> > > > > node4: starting datanode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-datanode-node4.cluster1domain.out
> > > > > node5: starting datanode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-datanode-node5.cluster1domain.out
> > > > > node6: starting datanode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-datanode-node6.cluster1domain.out
> > > > > cluster1: starting secondarynamenode, logging to /home/hadoop/tools/hadoop-0.18.0/bin/../logs/hadoop-hadoop-secondarynamenode-cluster1.cluster1domain.out
> > > > > [hadoop@cluster1 hadoop-0.18.0]$ bin/hadoop dfsadmin -report
> > > > > Total raw bytes: 0 (0 KB)
> > > > > Remaining raw bytes: 0 (0 KB)
> > > > > Used raw bytes: 0 (0 KB)
> > > > > % used: ?%
> > > > >
> > > > > Total effective bytes: 0 (0 KB)
> > > > > Effective replication multiplier: NaN
> > > > > -------------------------------------------------
> > > > > Datanodes available: 0
