From: rahul p <rahulpoolanchalil@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 9 Aug 2012 20:57:14 +0800
Subject: Re: namenode instantiation error

Thanks Tariq, let me start with that.

On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
> Hello Rahul,
>
>     That's great. That's the best way to learn (I am doing the same :) ).
> Since the installation part is over, I would suggest getting yourself
> familiar with HDFS and MapReduce first. Try to do some basic filesystem
> operations using the HDFS API and run the wordcount program, if you
> haven't done it yet. Then move ahead.
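>
> If you want a starting point, here is a minimal, untested sketch of such
> a filesystem operation (the class name and the /user/hive path are made
> up for illustration; a default Configuration picks up the core-site.xml
> on your classpath):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.fs.FSDataOutputStream;
>     import org.apache.hadoop.fs.FileSystem;
>     import org.apache.hadoop.fs.Path;
>
>     public class HdfsHello {
>         public static void main(String[] args) throws Exception {
>             // Reads core-site.xml / hdfs-site.xml from the classpath.
>             Configuration conf = new Configuration();
>             // Connects to the filesystem named by fs.default.name.
>             FileSystem fs = FileSystem.get(conf);
>             Path p = new Path("/user/hive/hello.txt"); // illustrative path
>             FSDataOutputStream out = fs.create(p);     // write a small file
>             out.writeUTF("hello hdfs");
>             out.close();
>             System.out.println("exists: " + fs.exists(p)); // metadata call
>             fs.close();
>         }
>     }
>
> For wordcount, the examples jar that ships with the install should be
> enough (the exact jar name varies with the version):
>
>     hadoop jar hadoop-examples-*.jar wordcount <input dir> <output dir>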
>
> Regards,
>     Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <rahulpoolanchalil@gmail.com> wrote:
> > Hi Tariq,
> >
> > I am also new to Hadoop and trying to learn by myself; can anyone help
> > me with the same? I have installed CDH3.
> >
> >
> > On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
> >>
> >> Hello Anand,
> >>
> >>     Is there any specific reason behind not using ssh?
> >>
> >> Regards,
> >>     Mohammad Tariq
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com> wrote:
> >> > Hi, I am just learning Hadoop and I am setting up the development
> >> > environment with CDH3 in pseudo-distributed mode, without any ssh
> >> > configuration, on CentOS 6.2. I can run the sample programs as usual,
> >> > but when I try to run the namenode this is the error it logs...
> >> >
> >> > [hive@localhost ~]$ hadoop namenode
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >> > /************************************************************
> >> > STARTUP_MSG: Starting NameNode
> >> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >> > STARTUP_MSG:   args = []
> >> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
> >> > ************************************************************/
> >> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> >> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> >> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> >> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >> >         at java.io.RandomAccessFile.open(Native Method)
> >> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >> >         at java.io.RandomAccessFile.open(Native Method)
> >> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >> >
> >> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >> > /************************************************************
> >> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >> > ************************************************************/
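> >>
> >> By the way, the "Permission denied" on in_use.lock means the user you
> >> are running the namenode as (hive) cannot write to the name directory,
> >> which your log shows under /var/lib/hadoop-0.20/cache/hadoop. It was
> >> probably created or formatted by a different user. Worth checking who
> >> owns it and starting the daemon as that user, e.g. (the hdfs account
> >> here is only the usual CDH packaging default; check what actually
> >> exists on your box):
> >>
> >>     ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs/name
> >>     sudo -u hdfs hadoop namenode
> >>
> >> On a throwaway pseudo-distributed box you could instead chown -R that
> >> cache directory to the user you want to run the daemons as.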