Subject: Re: Second node hdfs
From: Cyril Bogus <cyrilbogus@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 13 Mar 2013 11:46:26 -0400

nvm. Thanks Nitin for the tutorial explaining how to do this without formatting the hdfs.

On Wed, Mar 13, 2013 at 11:43 AM, Cyril Bogus wrote:
> Thank you both.
>
> So what both of you were saying is that, in order to start and synchronize
> the cluster, I will have to format both nodes at the same time.
>
> I was working on the master node without the second node and did not
> format before trying to start the second one.
>
> I reformatted the cluster with both nodes connected and it worked. But I
> have a question.
>
> If I want to add a third node and my current cluster is populated with
> some tables, will I have to format it again in order to add the node?
>
>
> On Wed, Mar 13, 2013 at 10:34 AM, Mohammad Tariq wrote:
>
>> Hello Cyril,
>>
>> This is because your datanode has a different namespaceID from the one
>> the master (namenode) actually has. Have you formatted HDFS recently?
>> Were you able to format it properly? Every time you format HDFS, the
>> NameNode generates a new namespaceID, which must be the same on both the
>> NameNode and the DataNodes; otherwise the DataNode won't be able to
>> reach the NameNode.
>>
>> Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>> cloudfront.blogspot.com
>>
>>
>> On Wed, Mar 13, 2013 at 7:57 PM, Cyril Bogus wrote:
>>
>>> I am trying to start the datanode on the slave node, but when I check
>>> the dfs I only have one node.
>>>
>>> When I check the logs on the slave node I find the following output.
>>>
>>> 2013-03-13 10:22:14,608 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting DataNode
>>> STARTUP_MSG:   host = Owner-5/127.0.1.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 2013-03-13 10:22:15,086 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>> 2013-03-13 10:22:15,121 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>>> 2013-03-13 10:22:15,662 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>>> 2013-03-13 10:22:15,686 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
>>> 2013-03-13 10:22:19,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hadoop/hdfs/data: namenode namespaceID = 1683708441; datanode namespaceID = 606666501
>>>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>>
>>> 2013-03-13 10:22:19,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down DataNode at Owner-5/127.0.1.1
>>> ************************************************************/
>>>
>>> Thank you for any insights.
>>>
>>> Cyril
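[For reference: the "Incompatible namespaceIDs" error above can usually be cleared without reformatting HDFS by copying the namenode's namespaceID into the datanode's VERSION file, which appears to be what the tutorial Cyril mentions describes. A minimal sketch follows, using throwaway demo directories with IDs taken from the log; on a real Hadoop 1.x cluster the files live under dfs.name.dir and dfs.data.dir (e.g. /home/hadoop/hdfs/data/current/VERSION here), and the datanode must be stopped before editing.]

```shell
# Demo directories standing in for dfs.name.dir (namenode) and
# dfs.data.dir (datanode); replace with your real storage paths.
WORK=$(mktemp -d)
mkdir -p "$WORK/name/current" "$WORK/data/current"
echo "namespaceID=1683708441" > "$WORK/name/current/VERSION"  # namenode's ID
echo "namespaceID=606666501"  > "$WORK/data/current/VERSION"  # stale datanode ID

# Read the namenode's namespaceID ...
NS_ID=$(grep '^namespaceID=' "$WORK/name/current/VERSION" | cut -d= -f2)

# ... and write it into the datanode's VERSION file, then restart the datanode.
sed -i "s/^namespaceID=.*/namespaceID=$NS_ID/" "$WORK/data/current/VERSION"
cat "$WORK/data/current/VERSION"
```

[The blunter alternative is to wipe the datanode's data directory and let it re-register with a fresh ID, at the cost of re-replicating its blocks.]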