From: anil gupta
Date: Wed, 8 Aug 2012 00:56:35 -0700
Subject: Re: Data node error
To: user@hadoop.apache.org

Hi Prabhu,

Did you clean the data directories on the DataNodes? Whenever the NameNode is reformatted, the data directories of the DataNodes need to be cleaned up as well. As far as I remember, it is the directory you point to with dfs.data.dir in hdfs-site.xml. A Google search for the error message will also turn up more details. (Sorry, I don't have access to my cluster configuration right now to confirm the exact property name.)
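Going from memory, on a 1.x cluster the relevant pieces should look roughly like the sketch below; the value shown is just the path from your slave2 log, so double-check it against your own hdfs-site.xml:

    <!-- hdfs-site.xml: directory where the DataNode stores its blocks -->
    <property>
      <name>dfs.data.dir</name>
      <value>/app/hadoop/tmp/dfs/data</value>
    </property>

Then, with the cluster stopped, clear that directory on every slave and start everything up again:

    # WARNING: this deletes the HDFS block data stored on the slave
    bin/stop-all.sh                       # on the master
    rm -rf /app/hadoop/tmp/dfs/data/*     # on each slave; the dir from dfs.data.dir
    bin/start-all.sh                      # on the master

Once the stale data directory is gone, the DataNode should register with the freshly formatted NameNode and pick up its new namespaceID, which should clear the "Incompatible namespaceIDs" error you see on slave2.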
Thanks,
Anil

On Wed, Aug 8, 2012 at 12:49 AM, prabhu K wrote:
> Hi Users,
>
> I have formatted the Hadoop cluster, and the format completed successfully. After stopping and starting Hadoop, the jps command on the master shows everything running fine, but on the slave machines the DataNode does not come up. In the DataNode log files I see the following errors.
>
> Data node (slave1):
> 2012-08-08 00:16:44,033 WARN org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Second Verification failed for blk_-3831635302961953167_1690. Exception : java.io.IOException: Block blk_-3831635302961953167_1690 is not valid.
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:1072)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:1035)
>         at org.apache.hadoop.hdfs.server.datanode.FSDataset.getVisibleLength(FSDataset.java:1045)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:94)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:81)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.verifyBlock(DataBlockScanner.java:453)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.verifyFirstBlock(DataBlockScanner.java:519)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.run(DataBlockScanner.java:617)
>         at java.lang.Thread.run(Thread.java:662)
>
> Data node (slave2):
> 2012-08-08 13:03:50,195 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /app/hadoop/tmp/dfs/data: namenode namespaceID = 1434906924; datanode namespaceID = 474761520
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>
> Please help me with this issue.
>
> Thanks,
> Prabhu.

--
Thanks & Regards,
Anil Gupta