Subject: Re: In Compatible clusterIDs
From: nagarjuna kanamarlapudi <nagarjuna.kanamarlapudi@gmail.com>
To: user@hadoop.apache.org
Date: Wed, 20 Feb 2013 21:22:12 +0530

Hi Jean-Marc,

Yes, this is the cluster I am trying to create and will then scale up.

As per your suggestion I deleted the folder
/Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
and formatted the cluster. Now I get the following error:

2013-02-20 21:17:25,668 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-811644675-124.123.215.187-1361375214801 (storage id DS-1515823288-124.123.215.187-50010-1361375245435) service to nagarjuna/124.123.215.187:9000
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0, storageID=DS-1515823288-124.123.215.187-50010-1361375245435, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-723b02a7-3441-41b5-8045-2a45a9cf96b0;nsid=1805451571;c=0)
        at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:629)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
        at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
        at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
        at org.apache.hadoop.ipc.Protob

On Wed, Feb 20, 2013 at 9:10 PM, Jean-Marc Spaggiari <jean-marc@spaggiari.org> wrote:

> Hi Nagarjuna,
>
> Is it a test cluster? Do you have another cluster running close by?
> Also, is it your first try?
>
> It seems there is some previous data in the dfs directory which is not
> in sync with the last installation.
>
> Maybe you can remove the content of
> /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20
> if it's not useful for you, reformat your node and restart it?
>
> JM
>
> 2013/2/20, nagarjuna kanamarlapudi <nagarjuna.kanamarlapudi@gmail.com>:
> > Hi,
> >
> > I am trying to set up a single-node cluster of hadoop 2.0.*
> >
> > When trying to start the datanode I got the following error. Could
> > anyone help me out?
> >
> > Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to
> > nagarjuna/124.123.215.187:9000
> > java.io.IOException: Incompatible clusterIDs in
> > /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data:
> > namenode clusterID = CID-800b7eb1-7a83-4649-86b7-617913e82ad8; datanode
> > clusterID = CID-1740b490-8413-451c-926f-2f0676b217ec
> >         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
> >         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
> >         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:850)
> >         at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:821)
> >         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
> >         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
> >         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
> >         at java.lang.Thread.run(Thread.java:680)
> > 2013-02-20 21:03:39,856 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service
> > for: Block pool BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895) service to
> > nagarjuna/124.123.215.187:9000
> > 2013-02-20 21:03:39,958 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool
> > BP-1894309265-124.123.215.187-1361374377471 (storage id
> > DS-1175433225-124.123.215.187-50010-1361374235895)
> > 2013-02-20 21:03:41,959 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> > 2013-02-20 21:03:41,961 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
> > 2013-02-20 21:03:41,963 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down DataNode at nagarjuna/124.123.215.187
> > ************************************************************/
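For reference, the clusterIDs that the "Incompatible clusterIDs" exception compares are stored in a plain-text VERSION properties file under each storage directory, which is why wiping tmp_20 and reformatting changes them. On a throwaway test cluster, an alternative to deleting the datanode's data is to make the IDs agree by editing that file by hand. This is not from the thread; the sketch below assumes the Hadoop 2.0.x field layout, with the ID values copied from the logs quoted above:

```
# Sketch of the datanode's VERSION file (field layout assumed for Hadoop 2.0.x):
# /Users/nagarjunak/Documents/hadoop-install/hadoop-2.0.3-alpha/tmp_20/dfs/data/current/VERSION
# A matching VERSION file sits under the namenode's name directory; the
# clusterID line here must equal the namenode's clusterID for registration
# to succeed.
storageID=DS-1175433225-124.123.215.187-50010-1361374235895
clusterID=CID-800b7eb1-7a83-4649-86b7-617913e82ad8
cTime=0
storageType=DATA_NODE
layoutVersion=-40
```

After editing, restart the datanode. On anything other than a disposable test setup, a clusterID mismatch usually means the datanode is pointed at the wrong (or freshly reformatted) namenode, so check that before touching stored data.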