From: anand sharma <anand2sharma@gmail.com>
To: user@hadoop.apache.org
Date: Fri, 10 Aug 2012 09:36:02 +0530
Subject: Re: namenode instantiation error

It's false, Abhishek:

  <property>
      <name>dfs.permissions</name>
      <value>false</value>
  </property>

  <property>
      <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
      <name>dfs.name.dir</name>
      <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
  </property>

On Thu, Aug 9, 2012 at 6:29 PM, Abhishek <abhishek.dodda1@gmail.com> wrote:
> Hi Anand,
>
> What are the permissions on the dfs.name.dir directory in hdfs-site.xml?
>
> Regards
> Abhishek
>
> Sent from my iPhone
>
> On Aug 9, 2012, at 8:41 AM, anand sharma <anand2sharma@gmail.com> wrote:
>
> Yea Tariq! It's a fresh installation; I'm doing it for the first time. Hope someone will know the error code and the reason for the error.
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>> Hi Anand,
>>
>> Have you tried any other Hadoop distribution or version? In that case, first remove the older one and start fresh.
>>
>> Regards,
>> Mohammad Tariq
>>
>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>> > Hello Rahul,
>> >
>> > That's great. That's the best way to learn (I am doing the same :) ). Since the installation part is over, I would suggest getting yourself familiar with HDFS and MapReduce first. Try to do basic filesystem operations using the HDFS API and run the wordcount program, if you haven't done it yet (a concrete sketch follows at the end of this message). Then move ahead.
>> >
>> > Regards,
>> > Mohammad Tariq
>> >
>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <rahulpoolanchalil@gmail.com> wrote:
>> >> Hi Tariq,
>> >>
>> >> I am also new to Hadoop, trying to learn by myself; can anyone help me with the same? I have installed CDH3.
>> >>
>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>> >>> Hello Anand,
>> >>>
>> >>> Is there any specific reason behind not using ssh?
>> >>>
>> >>> Regards,
>> >>> Mohammad Tariq
>> >>>
>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com> wrote:
>> >>> > Hi, I am just learning Hadoop and I am setting up the development environment with CDH3 in pseudo-distributed mode, without any ssh configuration, on CentOS 6.2. I can run the sample programs as usual, but when I try to run the namenode, this is the error it logs:
>> >>> >
>> >>> > [hive@localhost ~]$ hadoop namenode
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> >>> > /************************************************************
>> >>> > STARTUP_MSG: Starting NameNode
>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> >>> > STARTUP_MSG:   args = []
>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
>> >>> > ************************************************************/
>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >>> >     at java.io.RandomAccessFile.open(Native Method)
>> >>> >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >>> >     at java.io.RandomAccessFile.open(Native Method)
>> >>> >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> >
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> >>> > /************************************************************
>> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> >>> > ************************************************************/
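
For anyone finding this thread later: dfs.permissions=false only turns off HDFS-level permission checking. The FileNotFoundException above is the local filesystem refusing to let the user from the log (fsOwner=hive) write the in_use.lock file under dfs.name.dir. A minimal check-and-fix sketch, assuming the usual CDH3 packaging where that directory belongs to the hdfs service account (an assumption, not confirmed in this thread):

# See who actually owns dfs.name.dir (path taken from the log above)
ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name

# Either start the NameNode as the owning user ('hdfs' is the usual
# CDH3 service account, assumed here) ...
sudo -u hdfs hadoop namenode

# ... or, on a throwaway pseudo-distributed box only, hand the
# directory to the user you run the daemon as ('hive', per the log):
sudo chown -R hive:hive /var/lib/hadoop-0.20/cache/hadoop/dfs/name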
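And Tariq's wordcount suggestion, made concrete: a minimal run against the stock examples jar, assuming the CDH3 package layout (the jar path and input files are illustrative; adjust them to your install):

# Copy some local files into HDFS (relative paths land under /user/<you>)
hadoop fs -mkdir input
hadoop fs -put /etc/hadoop/conf/*.xml input

# Run the bundled wordcount example and look at the first few counts
hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount input output
hadoop fs -cat 'output/part-*' | head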
>
> Regards,
> =A0 =A0 Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 5:20 PM, rahul p <rahulpoolanchalil@gmail.com> = wrote:
>> Hi Tariq,
>>
>> I am also new to Hadoop trying to learn my self can anyone help me= on the
>> same.
>> i have installed CDH3.
>>
>>
>>
>> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <dontariq@gmail.com> wrote:<= br> >>>
>>> Hello Anand,
>>>
>>> =A0 =A0 Is there any specific reason behind not using ssh?? >>>
>>> Regards,
>>> =A0 =A0 Mohammad Tariq
>>>
>>>
>>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com&g= t;
>>> wrote:
>>> > Hi, i am just learning the Hadoop and i am setting the de= velopment
>>> > environment with CDH3 pseudo distributed mode without any= ssh
>>> > cofiguration
>>> > in CentOS 6.2 . i can run the sample programs as usual bu= t when i try
>>> > and
>>> > run namenode this is the error it logs...
>>> >
>>> > [hive@localhost ~]$ hadoop namenode
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG: >>> > /********************************************************= ****
>>> > STARTUP_MSG: Starting NameNode
>>> > STARTUP_MSG: =A0 host =3D localhost.localdomain/127.0.0.1
>>> > STARTUP_MSG: =A0 args =3D []
>>> > STARTUP_MSG: =A0 version =3D 0.20.2-cdh3u4
>>> > STARTUP_MSG: =A0 build =3D
>>> > file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4
>>> > -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by = 'root' on Mon
>>> > May
>>> > 7 14:01:59 PDT 2012
>>> > *********************************************************= ***/
>>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM M= etrics with
>>> > processName=3DNameNode, sessionId=3Dnull
>>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializ= ing
>>> > NameNodeMeterics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext=
>>> > 12/08/09 20:56:57 INFO util.GSet: VM type =A0 =A0 =A0 =3D= 64-bit
>>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory =3D 17.77= 875 MB
>>> > 12/08/09 20:56:57 INFO util.GSet: capacity =A0 =A0 =A0=3D= 2^21 =3D 2097152 entries
>>> > 12/08/09 20:56:57 INFO util.GSet: recommended=3D2097152, = actual=3D2097152
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=3Dh= ive (auth:SIMPLE)
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup= =3Dsupergroup
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissio= nEnabled=3Dfalse
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem:
>>> > dfs.block.invalidate.limit=3D1000
>>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTok= enEnabled=3Dfalse
>>> > accessKeyUpdateInterval=3D0 min(s), accessTokenLifetime= =3D0 min(s)
>>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initi= alizing
>>> > FSNamesystemMetrics using context
>>> > object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext=
>>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesyst= em
>>> > initialization
>>> > failed.
>>> > java.io.FileNotFoundException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (P= ermission
>>> > denied)
>>> > at java.io.RandomAccessFile.open(Native Method)
>>> > at java.io.RandomAccessFile.<init>(RandomAccessFile= .java:216)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirec= tory.tryLock(Storage.java:614)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirec= tory.lock(Storage.java:591)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirec= tory.analyzeStorage(Storage.java:449)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTra= nsitionRead(FSImage.java:304)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFS= Image(FSDirectory.java:110)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initi= alize(FSNamesystem.java:372)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<i= nit>(FSNamesystem.java:335)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initializ= e(NameNode.java:271)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init&= gt;(NameNode.java:467)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNam= eNode(NameNode.java:1330)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(Name= Node.java:1339)
>>> > 12/08/09 20:56:57 ERROR namenode.NameNode:
>>> > java.io.FileNotFoundException:
>>> > /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (P= ermission
>>> > denied)
>>> > at java.io.RandomAccessFile.open(Native Method)
>>> > at java.io.RandomAccessFile.<init>(RandomAccessFile= .java:216)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirec= tory.tryLock(Storage.java:614)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirec= tory.lock(Storage.java:591)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.common.Storage$StorageDirec= tory.analyzeStorage(Storage.java:449)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTra= nsitionRead(FSImage.java:304)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFS= Image(FSDirectory.java:110)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initi= alize(FSNamesystem.java:372)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<i= nit>(FSNamesystem.java:335)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.initializ= e(NameNode.java:271)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init&= gt;(NameNode.java:467)
>>> > at
>>> >
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.createNam= eNode(NameNode.java:1330)
>>> > at
>>> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(Name= Node.java:1339)
>>> >
>>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG: >>> > /********************************************************= ****
>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdo= main/127.0.0.1
>>> > *********************************************************= ***/
>>> >
>>> >
>>
>>


--f46d043bdb4e63140104c6e1760f--