From: Mohammad Tariq
Date: Wed, 13 Mar 2013 21:17:32 +0530
Subject: Re: Second node hdfs
To: user@hadoop.apache.org

No, you don't have to format the NN every time you add a DN. Looking at
your case, it seems the second DN was part of some other cluster and still
carries the namespaceID of that cluster's NN.
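Adding a node is normally just a matter of bringing up a fresh DataNode
that points at the running NameNode. A minimal sketch for a 1.x cluster
(the hostname "node3" is only an example; run the commands from
$HADOOP_HOME, and make sure the new node's core-site.xml points at the
master):

    # on the master: list the new host so start-dfs.sh/stop-dfs.sh manage it
    echo "node3" >> conf/slaves

    # on node3: start the DataNode against the existing NameNode; an empty
    # dfs.data.dir gets initialized with the cluster's current namespaceID
    bin/hadoop-daemon.sh start datanode

    # confirm the new node registered
    bin/hadoop dfsadmin -report

Formatting is only needed when you create a brand-new namespace, and it
wipes the metadata of the existing one.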
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com


On Wed, Mar 13, 2013 at 9:13 PM, Cyril Bogus wrote:

> Thank you both.
>
> So what both of you are saying is that in order to start and synchronize
> the cluster, I would have to format both nodes at the same time.
>
> I was working on the master node without the second node, and did not
> format before trying to start the second one.
>
> I reformatted the cluster with both nodes connected and it worked. But I
> have a question.
>
> If I want to add a third node while my current cluster is populated with
> some tables, will I have to format it again in order to add the node?
>
>
> On Wed, Mar 13, 2013 at 10:34 AM, Mohammad Tariq wrote:
>
>> Hello Cyril,
>>
>> This is because your DataNode has a different namespaceID from the one
>> the master (NameNode) actually has. Have you formatted HDFS recently?
>> Were you able to format it properly? Every time you format HDFS, the
>> NameNode generates a new namespaceID, which must be the same on the
>> NameNode and the DataNodes; otherwise a DataNode won't be able to reach
>> the NameNode.
>>
>> Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>> cloudfront.blogspot.com
>>
>>
>> On Wed, Mar 13, 2013 at 7:57 PM, Cyril Bogus wrote:
>>
>>> I am trying to start the DataNode on the slave node, but when I check
>>> the dfs I only see one node.
>>>
>>> When I check the logs on the slave node I find the following output.
>>>
>>> 2013-03-13 10:22:14,608 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting DataNode
>>> STARTUP_MSG:   host = Owner-5/127.0.1.1
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 2013-03-13 10:22:15,086 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>> 2013-03-13 10:22:15,121 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
>>> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>> 2013-03-13 10:22:15,123 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
>>> 2013-03-13 10:22:15,662 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
>>> 2013-03-13 10:22:15,686 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
>>> 2013-03-13 10:22:19,730 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/hadoop/hdfs/data: namenode namespaceID = 1683708441; datanode namespaceID = 606666501
>>>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>>>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>>>
>>> 2013-03-13 10:22:19,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down DataNode at Owner-5/127.0.1.1
>>> ************************************************************/
>>>
>>> Thank you for any insights.
>>>
>>> Cyril
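To make the fix concrete: the namespaceID a DataNode belongs to is stored
in plain text under its data directory, so you can check it against the
NameNode's without reformatting anything. A minimal sketch, assuming the
1.x storage layout and the dfs.data.dir from the log above
(/home/hadoop/hdfs/data):

    # on the slave: inspect the DataNode's stored ID
    cat /home/hadoop/hdfs/data/current/VERSION
    # namespaceID=606666501  <- stale; the NameNode above expects 1683708441

    # simplest fix when the node holds no data you still need:
    # clear its storage and restart so it re-registers with the current NN
    bin/hadoop-daemon.sh stop datanode
    rm -rf /home/hadoop/hdfs/data/*
    bin/hadoop-daemon.sh start datanode

    # alternatively, edit the namespaceID line in that VERSION file to the
    # NameNode's value and restart the DataNode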