From: Nitin Pawar
To: user@hadoop.apache.org
Date: Wed, 13 Mar 2013 20:03:52 +0530
Subject: Re: Second node hdfs

Was this namenode part of any other Hadoop cluster? Did you format your
namenode and forget to clean up the datanode?

You can refer to Michael's blog for more details and solutions:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#javaioioexception-incompatible-namespaceids

On Wed, Mar 13, 2013 at 7:57 PM, Cyril Bogus <cyrilbogus@gmail.com> wrote:
> I am trying to start the datanode on the slave node, but when I check the
> dfs I only have one node.
>
> When I check the logs on the slave node I find the following output:
>
> 2013-03-13 10:22:14,608 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = Owner-5/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.4
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
> ************************************************************/
> 2013-03-13 10:22:15,086 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2013-03-13 10:22:15,121 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
> 2013-03-13 10:22:15,662 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> 2013-03-13 10:22:15,686 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
> 2013-03-13 10:22:19,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hadoop/hdfs/data: namenode namespaceID = 1683708441; datanode namespaceID = 606666501
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>
> 2013-03-13 10:22:19,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at Owner-5/127.0.1.1
> ************************************************************/
>
> Thank you for any insights.
>
> Cyril

--
Nitin Pawar
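For the archives: the "Incompatible namespaceIDs" error in Cyril's log means the namenode was reformatted while the datanode kept storage from the old namespace. The blog linked above describes two workarounds, sketched below. This is a hedged sketch, not a drop-in fix: the real storage directory on the slave is /home/hadoop/hdfs/data (from the log, check your dfs.data.dir), and the edit is demonstrated here on a throwaway copy under /tmp/ns-demo so nothing real is touched; the namespaceID values are the ones from the error message.

```shell
# Option 1 (destructive): stop the datanode, wipe its storage directory, and
# restart it so it re-registers under the new namespaceID. All block replicas
# held by this node are lost. Shown commented out on purpose:
#   bin/stop-all.sh
#   rm -rf /home/hadoop/hdfs/data/*     # check dfs.data.dir first
#   bin/start-all.sh

# Option 2 (keeps block data): rewrite the namespaceID line in the datanode's
# current/VERSION file to match the namenode's. Demonstrated on a mock copy:
mkdir -p /tmp/ns-demo/current
cat > /tmp/ns-demo/current/VERSION <<'EOF'
namespaceID=606666501
storageType=DATA_NODE
layoutVersion=-32
EOF

# 1683708441 is the namenode's namespaceID reported in the exception.
sed -i 's/^namespaceID=.*/namespaceID=1683708441/' /tmp/ns-demo/current/VERSION
grep '^namespaceID=' /tmp/ns-demo/current/VERSION
```

With either workaround applied to the real directory (datanode stopped first), `hadoop dfsadmin -report` on the master should then show both datanodes live.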