hadoop-user mailing list archives

From anand sharma <anand2sha...@gmail.com>
Subject Re: namenode instantiation error
Date Fri, 10 Aug 2012 04:06:02 GMT
It's false, Abhishek:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

<property>
  <!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
</property>
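
Note that dfs.permissions=false only turns off permission checking inside
HDFS; the "Permission denied" on in_use.lock is a local filesystem problem:
the user running the namenode (hive, per the log) cannot write to
dfs.name.dir. A rough check and fix, assuming the stock CDH3
pseudo-distributed layout where that directory is owned by the hdfs user
(adjust user and path to your setup):

  # see who owns the name directory
  ls -ld /var/lib/hadoop-0.20/cache/hadoop/dfs/name

  # either run the namenode as the owning user...
  sudo -u hdfs hadoop namenode

  # ...or hand the directory to the user you start it as
  sudo chown -R hive:hive /var/lib/hadoop-0.20/cache/hadoop/dfs/name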


On Thu, Aug 9, 2012 at 6:29 PM, Abhishek <abhishek.dodda1@gmail.com> wrote:

> Hi Anand,
>
> What are the permissions on the dfs.name.dir directory in hdfs-site.xml?
>
> Regards
> Abhishek
>
>
> Sent from my iPhone
>
> On Aug 9, 2012, at 8:41 AM, anand sharma <anand2sharma@gmail.com> wrote:
>
> Yes, Tariq! It's a fresh installation; I am doing it for the first time.
> I hope someone will know the error code and the reason for the error.
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>
>> Hi Anand,
>>
>>       Have you tried any other Hadoop distribution or version as well? In
>> that case, first remove the older one and start fresh.
>>
>> Regards,
>>     Mohammad Tariq
>>
>>
>> On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq <dontariq@gmail.com>
>> wrote:
>> > Hello Rahul,
>> >
>> >    That's great. That's the best way to learn (I am doing the same :) ).
>> > Since the installation part is over, I would suggest getting yourself
>> > familiar with HDFS and MapReduce first. Try to do basic filesystem
>> > operations using the HDFS API and run the wordcount program, if you
>> > haven't done it yet. Then move ahead.
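>> >
>> > For instance, something like this exercises both (paths and the
>> > examples jar name are illustrative and vary by release; the Java
>> > FileSystem API mirrors these shell operations):
>> >
>> >   # put a small file into HDFS and list it
>> >   hadoop fs -mkdir /user/hive/input
>> >   hadoop fs -put /etc/hosts /user/hive/input
>> >   hadoop fs -ls /user/hive/input
>> >   # run the bundled wordcount example and read the result
>> >   hadoop jar /usr/lib/hadoop-0.20/hadoop-examples.jar wordcount \
>> >       /user/hive/input /user/hive/output
>> >   hadoop fs -cat '/user/hive/output/part-*'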
>> >
>> > Regards,
>> >     Mohammad Tariq
>> >
>> >
>> > On Thu, Aug 9, 2012 at 5:20 PM, rahul p <rahulpoolanchalil@gmail.com>
>> wrote:
>> >> Hi Tariq,
>> >>
>> >> I am also new to Hadoop and trying to learn it myself; can anyone help
>> >> me with the same? I have installed CDH3.
>> >>
>> >>
>> >>
>> >> On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq <dontariq@gmail.com>
>> wrote:
>> >>>
>> >>> Hello Anand,
>> >>>
>> >>>     Is there any specific reason behind not using ssh?
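>> >>>
>> >>> Asking because the start-all.sh/stop-all.sh helper scripts reach the
>> >>> daemons over ssh, even on a single machine. If you do want passwordless
>> >>> ssh, the usual setup is (default key paths assumed):
>> >>>
>> >>>   ssh-keygen -t rsa     # accept defaults, empty passphrase
>> >>>   cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for localhost
>> >>>   chmod 600 ~/.ssh/authorized_keys                 # sshd insists on strict permissions
>> >>>   ssh localhost         # should now log in without a password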
>> >>>
>> >>> Regards,
>> >>>     Mohammad Tariq
>> >>>
>> >>>
>> >>> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma <anand2sharma@gmail.com>
>> >>> wrote:
>> >>> > Hi, I am just learning Hadoop and I am setting up the development
>> >>> > environment with CDH3 in pseudo-distributed mode, without any ssh
>> >>> > configuration, on CentOS 6.2. I can run the sample programs as usual,
>> >>> > but when I try to run the namenode this is the error it logs...
>> >>> >
>> >>> > [hive@localhost ~]$ hadoop namenode
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
>> >>> > /************************************************************
>> >>> > STARTUP_MSG: Starting NameNode
>> >>> > STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
>> >>> > STARTUP_MSG:   args = []
>> >>> > STARTUP_MSG:   version = 0.20.2-cdh3u4
>> >>> > STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
>> >>> > ************************************************************/
>> >>> > 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
>> >>> > 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
>> >>> > 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
>> >>> > 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>> >>> > 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
>> >>> > 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>> >>> > 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
>> >>> > 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
>> >>> > java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >>> >     at java.io.RandomAccessFile.open(Native Method)
>> >>> >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> > 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>> >>> >     at java.io.RandomAccessFile.open(Native Method)
>> >>> >     at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>> >>> >     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>> >>> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>> >>> >
>> >>> > 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
>> >>> > /************************************************************
>> >>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>> >>> > ************************************************************/
>> >>> >
>> >>> >
>> >>
>> >>
>>
>
>
