From: anand sharma <anand2sharma@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 9 Aug 2012 16:30:54 +0530
Subject: Re: namenode instantiation error

Thanks, all, for the replies. Yes, the user has access to that directory, and I have already formatted the namenode. Just for simplicity I am not using ssh, since I am doing things for the first time.

On Thu, Aug 9, 2012 at 3:58 PM, shashwat shriparv wrote:

> Format the filesystem:
>
>     bin/hadoop namenode -format
>
> then try to start the namenode :)
>
> On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq wrote:
>
>> Hello Anand,
>>
>> Is there any specific reason behind not using ssh?
>>
>> Regards,
>> Mohammad Tariq
>>
>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma wrote:
>>
>> > Hi, I am just learning Hadoop and am setting up the development
>> > environment with CDH3 in pseudo-distributed mode, without any ssh
>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
>> > but when I try to run the namenode, this is the error it logs...
>> > [hive@localhost ~]$ hadoop namenode
>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> > /************************************************************
>> > STARTUP_MSG: Starting NameNode
>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> > STARTUP_MSG:   args = []
>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
>> > ************************************************************/
>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >         at java.io.RandomAccessFile.open(Native Method)
>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >         at java.io.RandomAccessFile.open(Native Method)
>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >
>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> > /************************************************************
>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> > ************************************************************/
>
> --
> ∞
> Shashwat Shriparv
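[Editor's note] The trace above points at a filesystem-permission problem rather than a formatting or ssh problem: the NameNode opens `in_use.lock` for writing inside its storage directory (`/var/lib/hadoop-0.20/cache/hadoop/dfs/name` here), and the `hive` user apparently cannot write there. A minimal sketch of that failure mode, using a throwaway temp directory rather than the real CDH3 paths (run as a non-root user, since root bypasses mode bits):

```shell
#!/bin/sh
# The NameNode must create <dfs.name.dir>/in_use.lock at startup, so the
# storage directory has to be writable by whoever runs "hadoop namenode".
name_dir=$(mktemp -d)

chmod 500 "$name_dir"               # r-x only: mimics a dir owned by another user
if ! touch "$name_dir/in_use.lock" 2>/dev/null; then
    echo "in_use.lock: Permission denied"   # the same failure the log shows
fi

chmod 700 "$name_dir"               # the fix: make the dir writable by this user
                                    # (on a real cluster, chown it to the daemon user)
touch "$name_dir/in_use.lock" && echo "lock acquired"

rm -rf "$name_dir"
```

On the actual machine, `ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name` shows who owns the directory; starting the NameNode as that owner, or re-owning the directory to the user you start it as, would be the corresponding fix (exact user names depend on the install and are not stated in the thread).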
