hadoop-user mailing list archives

From anand sharma <anand2sha...@gmail.com>
Subject Re: namenode instantiation error
Date Sat, 11 Aug 2012 12:43:16 GMT
Thanks Tariq, I already have.

On Fri, Aug 10, 2012 at 7:51 PM, Mohammad Tariq <dontariq@gmail.com> wrote:

> Hello Anand,
>
>    Sorry for being unresponsive. You have in any case already got
> proper comments from the expert. I would just like to add one thing
> here. Since you want to reduce the complexity, I would suggest you
> configure SSH. It is a one-time pain, but it saves a lot of time and
> effort; otherwise you have to go to each node even for the smallest
> thing. SSH configuration is quite straightforward, and if you need
> some help with it you can go here:
>
> http://cloudfront.blogspot.in/2012/07/how-to-setup-and-configure-ssh-on-ubuntu.html
>
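> For reference, on a single pseudo-distributed box the whole setup is
> only a few commands (a minimal sketch using stock OpenSSH defaults):
>
> ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
> chmod 600 ~/.ssh/authorized_keys
> ssh localhost    # should now log in without a password prompt
>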
> Regards,
>     Mohammad Tariq
>
>
> On Fri, Aug 10, 2012 at 5:34 PM, Harsh J <harsh@cloudera.com> wrote:
> > You do not need SSH generally. See
> > http://wiki.apache.org/hadoop/FAQ#Does_Hadoop_require_SSH.3F
> >
> > 1. Your original issue is that you are starting the NameNode as the
> > wrong user. In a packaged environment, start it as the "hdfs" user:
> > run "sudo -u hdfs hadoop namenode" to start it in the foreground, or
> > simply run "sudo service hadoop-0.20-namenode start" to start it in
> > the background. This will fix it up for you.
> >
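> > As a quick sanity check after starting it (a sketch; assumes the
> > JDK's jps tool is on root's PATH):
> >
> > sudo -u hdfs hadoop namenode    # foreground, logs go to the console
> > sudo jps | grep -i namenode     # should show a running NameNode JVM
> >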
> > 2. Your format was aborted because in 0.20.x/1.x the confirmation
> > input was case-sensitive, while from 2.x onwards it is
> > case-insensitive. So if you had typed "Y" instead of "y", it would
> > have succeeded.
> >
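> > Concretely, on 0.20.x/1.x the format prompt looks roughly like this
> > and only an uppercase Y is accepted (a sketch, reusing the storage
> > directory from your log):
> >
> > sudo -u hdfs hadoop namenode -format
> > Re-format filesystem in /var/lib/hadoop-0.20/cache/hadoop/dfs/name ? (Y or N) Y
> >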
> > HTH!
> >
> > On Fri, Aug 10, 2012 at 4:35 PM, anand sharma <anand2sharma@gmail.com> wrote:
> >> And here are the permissions for the file that is causing the problem:
> >>
> >> [root@localhost hive]# ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >> -rwxrwxrwx. 1 hdfs hdfs 0 Aug 10 21:23 /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock
> >>
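> >> Since the lock file itself is world-writable (rwxrwxrwx) and owned by
> >> hdfs, the "Permission denied" is most likely raised on one of the
> >> parent directories instead. A quick way to check (a sketch; run as
> >> root, probing access as the "hive" user the namenode was started as):
> >>
> >> [root@localhost hive]# ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name
> >> [root@localhost hive]# sudo -u hive ls /var/lib/hadoop-0.20/cache/hadoop/dfs/name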
> >>
> >>
> >>
> >>
> >> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com> wrote:
> >>>
> >>> Hi, I am just learning Hadoop and I am setting up a development
> >>> environment with CDH3 in pseudo-distributed mode, without any SSH
> >>> configuration, on CentOS 6.2. I can run the sample programs as usual,
> >>> but when I try to run the namenode this is the error it logs...
> >>>
> >>> [hive@localhost ~]$ hadoop namenode
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> >>> /************************************************************
> >>> STARTUP_MSG: Starting NameNode
> >>> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> >>> STARTUP_MSG:   args = []
> >>> STARTUP_MSG:   version = 0.20.2-cdh3u4
> >>> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May  7 14:01:59 PDT 2012
> >>> ************************************************************/
> >>> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> >>> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> >>> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> >>> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> >>> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> >>> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> >>> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> >>> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> >>> java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> at java.io.RandomAccessFile.open(Native Method)
> >>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>> 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
> >>> at java.io.RandomAccessFile.open(Native Method)
> >>> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
> >>> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
> >>> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
> >>> at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
> >>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
> >>> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> >>>
> >>> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> >>> /************************************************************
> >>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> >>> ************************************************************/
> >>>
> >>>
> >>
> >
> >
> >
> > --
> > Harsh J
>
