From: Cyril Bogus
To: user@hadoop.apache.org
Date: Wed, 13 Mar 2013 11:43:50 -0400
Subject: Re: Second node hdfs

Thank you both.

So what both of you said, which turned out to be true, is that in order to start and synchronize the cluster I had to format both nodes at the same time. I had been working on the master node without the second node and did not format before trying to start the second one. I reformatted the cluster with both nodes connected and it worked.

But I have a question: if I want to add a third node and my current cluster is already populated with some tables, will I have to format it again in order to add the node?

On Wed, Mar 13, 2013 at 10:34 AM, Mohammad Tariq wrote:
> Hello Cyril,
>
>      This is because your datanode has a different namespaceID from the one
> the master (namenode) actually has. Have you formatted HDFS recently? Were
> you able to format it properly? Every time you format HDFS, the NameNode
> generates a new namespaceID, which must be the same on the NameNode and the
> DataNodes; otherwise the DataNode won't be able to reach the NameNode.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Wed, Mar 13, 2013 at 7:57 PM, Cyril Bogus wrote:
>
>> I am trying to start the datanode on the slave node, but when I check the
>> dfs I only see one node.
>>
>> When I check the logs on the slave node I find the following output.
>>
>> 2013-03-13 10:22:14,608 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = Owner-5/127.0.1.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 1.0.4
>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>> ************************************************************/
>> 2013-03-13 10:22:15,086 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>> 2013-03-13 10:22:15,121 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>> 2013-03-13 10:22:15,662 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>> 2013-03-13 10:22:15,686 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
>> 2013-03-13 10:22:19,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hadoop/hdfs/data: namenode namespaceID = 1683708441; datanode namespaceID = 606666501
>>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>
>> 2013-03-13 10:22:19,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down DataNode at Owner-5/127.0.1.1
>> ************************************************************/
>>
>> Thank you for any insights.
>>
>> Cyril
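For anyone hitting the same "Incompatible namespaceIDs" error: the namespaceID a datanode last registered with is recorded in a VERSION file under its data directory (dfs.data.dir, which in the log above is /home/hadoop/hdfs/data). A minimal sketch of how to inspect it is below; the demo writes a mock VERSION file under /tmp so it can run anywhere, and the /tmp path and sample values are illustrative assumptions, not real cluster state.

```shell
# Sketch: inspect the namespaceID a datanode has recorded.
# On a real node the file is ${dfs.data.dir}/current/VERSION
# (per the log above, /home/hadoop/hdfs/data/current/VERSION).
# Here we create a mock VERSION file instead of touching a live cluster.
mkdir -p /tmp/hdfs-demo/data/current
cat > /tmp/hdfs-demo/data/current/VERSION <<'EOF'
namespaceID=606666501
storageID=DS-demo-storage
cTime=0
layoutVersion=-32
EOF

# Extract the datanode's recorded namespaceID.
DN_ID=$(grep '^namespaceID=' /tmp/hdfs-demo/data/current/VERSION | cut -d= -f2)
echo "datanode namespaceID: $DN_ID"

# On a real cluster, compare this against the namenode's VERSION file
# (under ${dfs.name.dir}/current/). If they differ, stop the datanode and
# either wipe its data directory (losing its blocks) or edit namespaceID
# to match the namenode, then restart the datanode -- no reformat needed.
```

Note the implication for the third-node question: adding a fresh datanode does not require reformatting the cluster, since a new node has no stale VERSION file to conflict with the namenode.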