Subject: namenode instantiation error
From: anand sharma <anand2sharma@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 9 Aug 2012 15:46:19 +0530

Hi, I am just learning Hadoop, and I am setting up a development environment with CDH3 in pseudo-distributed mode, without any ssh configuration, on CentOS 6.2. I can run the sample programs as usual, but when I try to run the namenode, this is the error it logs:

[hive@localhost ~]$ hadoop namenode
12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2-cdh3u4
STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:01:59 PDT 2012
************************************************************/
12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
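The stack trace points at a file-permission problem rather than a Hadoop bug: the shell prompt shows the command running as user hive, while the lock file under the NameNode storage directory is evidently not writable by that user (CDH packages typically create it owned by the hdfs service user). A minimal diagnostic sketch, assuming only standard coreutils (the path is taken verbatim from the log above; the fallback message is illustrative):

```shell
# Path of the NameNode storage directory, taken from the error log.
NAME_DIR=/var/lib/hadoop-0.20/cache/hadoop/dfs/name

# Show the directory's owner, group, and mode, if it exists on this host.
ls -ld "$NAME_DIR" 2>/dev/null || echo "cannot stat $NAME_DIR"

# Show which user this shell is running as, for comparison with the owner.
id -un
```

If the owner printed by ls does not match the user printed by id, the in_use.lock "Permission denied" above is expected; running the daemon as the directory's owner (or adjusting ownership) is the usual remedy.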