hadoop-user mailing list archives

From anand sharma <anand2sha...@gmail.com>
Subject Re: namenode instantiation error
Date Fri, 10 Aug 2012 10:59:55 GMT
Yes Vinay, you are right: I am formatting it as root and running it as the
hive user, because when I try to format the namenode as hive it says:

[hive@localhost ~]$ hadoop namenode -format
12/08/10 21:42:13 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2-cdh3u4
STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
************************************************************/
Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N) y
Format aborted in /var/lib/hadoop-0.20/cache/hadoop/dfs/name
12/08/10 21:42:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

Yes, I think I may need to install ssh in order to get it up and running.
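Two things seem to be going on in the log above: in this Hadoop version the re-format confirmation prompt is case-sensitive, so answering with a lowercase y aborts the format (an uppercase Y is required); and the permission-denied error on in_use.lock in the earlier trace suggests the name directory is still owned by root from the root-side format. A minimal sketch of a fix, assuming the CDH3 default name directory shown in the log and that a hive user and group exist:

```shell
# Sketch only: assumes the CDH3 default name directory from the log above
# and that the namenode will be run as the "hive" user (group assumed).
NAME_DIR=/var/lib/hadoop-0.20/cache/hadoop/dfs/name

# Hand the storage directory (including in_use.lock) to the user
# that will actually run the namenode.
sudo chown -R hive:hive "$NAME_DIR"

# Re-format as that same user; answer the prompt with an uppercase Y,
# since a lowercase y aborts the format, as in the log above.
sudo -u hive hadoop namenode -format

# Then start the namenode as the same user.
sudo -u hive hadoop namenode
```

Formatting and starting as the same user keeps the on-disk storage directory and the running daemon under one owner, which is what the lock file check expects.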

On Fri, Aug 10, 2012 at 10:14 AM, Vinayakumar B <vinayakumar.b@huawei.com>wrote:

> Hi Anand,
>
> It's clearly telling that the namenode is not able to access the lock file
> inside the name dir:
>
> */var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)*
>
> Did you format the namenode as one user and start it as another?
>
> Try formatting and starting from the same user console.
>
> *From:* anand sharma [mailto:anand2sharma@gmail.com]
> *Sent:* Friday, August 10, 2012 9:37 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: namenode instantiation error
>
> Yes Owen, I did.
>
> On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan <sudyduan@gmail.com> wrote:
>
> Have you tried hadoop namenode -format?
>
> 2012/8/9 anand sharma <anand2sharma@gmail.com>
>
> Yes Tariq! It's a fresh installation, I am doing it for the first time;
> hope someone will know the error code and the reason for the error.
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>
> Hi Anand,
>
>       Have you tried any other Hadoop distribution or version as well? In
> that case, first remove the older one and start fresh.
>
> Regards,
>     Mohammad Tariq
>
>
>
> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
> > Hello Rahul,
> >
> >    That's great. That's the best way to learn (I am doing the same :) ).
> > Since the installation part is over, I would suggest getting yourself
> > familiar with HDFS and MapReduce first. Try to do basic filesystem
> > operations using the HDFS API and run the wordcount program, if you
> > haven't done it yet. Then move ahead.
> >
> > Regards,
> >     Mohammad Tariq
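As a sketch of the basic filesystem practice suggested above, using the hadoop fs shell rather than the HDFS Java API mentioned in the message (all paths here are purely illustrative):

```shell
# Illustrative HDFS warm-up; run as the user that owns your HDFS home
# directory. Every path below is made up for the example.
echo "hello hadoop" > /tmp/hello.txt

hadoop fs -mkdir /user/hive/demo                 # create a directory in HDFS
hadoop fs -put /tmp/hello.txt /user/hive/demo/   # upload a local file
hadoop fs -ls /user/hive/demo                    # list the directory
hadoop fs -cat /user/hive/demo/hello.txt         # read the file back
hadoop fs -rmr /user/hive/demo                   # recursive delete (0.20-era flag)
```

The same operations done through the Java FileSystem API are the natural next step once the shell versions work.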
> >
> >
> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <rahulpoolanchalil@gmail.com> wrote:
> >> Hi Tariq,
> >>
> >> I am also new to Hadoop, trying to learn by myself; can anyone help me
> >> on the same? I have installed CDH3.
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
> >>>
> >>> Hello Anand,
> >>>
> >>>     Is there any specific reason behind not using ssh??
> >>>
> >>> Regards,
> >>>     Mohammad Tariq
> >>>
> >>>
> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com> wrote:
> >>> > Hi, I am just learning Hadoop and I am setting up the development
> >>> > environment with CDH3 in pseudo-distributed mode, without any ssh
> >>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
> >>> > but when I try and run the namenode this is the error it logs:
> >>> >
> >>> > [hive@localhost ~]$ hadoop namenode
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> > /************************************************************
> >>> > STARTUP_MSG: Starting NameNode
> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> > STARTUP_MSG:   args = []
> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
> >>> > ************************************************************/
> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> > at java.io.RandomAccessFile.open(Native Method)
> >>> > at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> > at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> > at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> > at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> >
> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> > /************************************************************
> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> > ************************************************************/
> >>> >
> >>
> >>
>
