Date: Fri, 10 Aug 2012 16:29:55 +0530
From: anand sharma <anand2sharma@gmail.com>
To: user@hadoop.apache.org
Subject: Re: namenode instantiation error

Yea Vinay, you are right, I am formatting it from root and running it from the hive user, because when I try to format the namenode as hive it says:

[hive@localhost ~]$ hadoop namenode -format
12/08/10 21:42:13 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2-cdh3u4
STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
************************************************************/
Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N) y
Format aborted in /var/lib/hadoop-0.20/cache/hadoop/dfs/name
12/08/10 21:42:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

Yea, I think I may need to install ssh in order to get it up and running.
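For reference, ssh is not involved in the format/permission failure itself; in pseudo-distributed mode the start-all.sh/stop-all.sh wrapper scripts only ssh to localhost to launch the daemons. A typical passwordless setup on CentOS 6 (a sketch, assuming the stock openssh packages from the distro repos) looks like:

    # install and start the ssh daemon (as root)
    yum install openssh-server openssh-clients
    service sshd start

    # as the user that will run the daemons: create a passphrase-less key
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # should now log in without prompting for a password
    ssh localhost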
On Fri, Aug 10, 2012 at 10:14 AM, Vinayakumar B <vinayakumar.b@huawei.com> wrote:

> Hi Anand,
>
> It's clearly telling you that the namenode is not able to access the lock file inside the name dir:
>
> /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>
> Did you format the namenode using one user and start the namenode as another user?
>
> Try formatting and starting from the same user console.
>
> From: anand sharma [mailto:anand2sharma@gmail.com]
> Sent: Friday, August 10, 2012 9:37 AM
> To: user@hadoop.apache.org
> Subject: Re: namenode instantiation error
>
> yes Owen, I did.
>
> On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <sudyduan@gmail.com> wrote:
>
> have you tried hadoop namenode -format?
>
> 2012/8/9 anand sharma <anand2sharma@gmail.com>
>
> yea Tariq!! It's a fresh installation, I am doing it for the first time; hope someone will know the error code and the reason for the error.
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>
> Hi Anand,
>
> Have you tried any other Hadoop distribution or version as well? In that case, first remove the older one and start fresh.
>
> Regards,
> Mohammad Tariq
>
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
> > Hello Rahul,
> >
> > That's great. That's the best way to learn (I am doing the same :) ). Since the installation part is over, I would suggest getting yourself familiar with HDFS and MapReduce first. Try to do basic filesystem operations using the HDFS API and run the wordcount program, if you haven't done it yet. Then move ahead.
> >
> > Regards,
> > Mohammad Tariq
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <rahulpoolanchalil@gmail.com> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop, trying to learn by myself; can anyone help me with the same?
> >> I have installed CDH3.
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>> Is there any specific reason behind not using ssh?
> >>>
> >>> Regards,
> >>> Mohammad Tariq
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com> wrote:
> >>> > Hi, I am just learning Hadoop and I am setting up the development
> >>> > environment with CDH3 in pseudo-distributed mode, without any ssh
> >>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
> >>> > but when I try and run the namenode this is the error it logs...
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> >         at java.io.RandomAccessFile.open(Native Method)
> >>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> >         at java.io.RandomAccessFile.open(Native Method)
> >>> >         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> >         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> > ************************************************************/
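For anyone who lands on this thread later: the two failures above are related, and the fix is mechanical. The name directory was formatted as root, so it is owned by root and the hive user cannot create in_use.lock (the "Permission denied" in the trace); and the re-format attempted as hive aborted because the 0.20-era format prompt accepts only an uppercase 'Y'. A sketch of the usual cleanup, assuming you really do want to run the daemons as hive (the CDH3 packages normally use the hdfs user for this), is:

    # as root: hand the HDFS storage directory to the user that will run the namenode
    chown -R hive:hive /var/lib/hadoop-0.20/cache/hadoop/dfs

    # as hive: re-format, answering the prompt with an uppercase 'Y'
    # (the lowercase 'y' is why the log above says "Format aborted")
    su - hive
    hadoop namenode -format

    # start the namenode as the same user that formatted it
    hadoop namenode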
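And on Mohammad's suggestion about basic filesystem operations and wordcount once the namenode is up, a minimal first session might look like the sketch below (the input path is arbitrary and the examples-jar location is the usual CDH3 one, but the exact jar name can vary by build):

    # put a small file into HDFS and check it arrived
    hadoop fs -mkdir /user/hive/input
    hadoop fs -put /etc/hosts /user/hive/input
    hadoop fs -ls /user/hive/input

    # run the stock wordcount example over it and read the result
    hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount /user/hive/input /user/hive/output
    hadoop fs -cat /user/hive/output/part-*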