Subject: Re: namenode instantiation error
From: Nitin Pawar
To: user@hadoop.apache.org
Date: Thu, 9 Aug 2012 15:50:20 +0530

Which user are you starting the namenode as? If you are not root, does
that user have access to the directory mentioned in the error?

On Thu, Aug 9, 2012 at 3:46 PM, anand sharma wrote:
> Hi, I am just learning Hadoop and I am setting up the development
> environment with CDH3 in pseudo-distributed mode, without any ssh
> configuration, on CentOS 6.2. I can run the sample programs as usual,
> but when I try to run the namenode this is the error it logs:
>
> [hive@localhost ~]$ hadoop namenode
> 12/08/09 20:56:57 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2-cdh3u4
> STARTUP_MSG:   build = file:///data/1/tmp/topdir/BUILD/hadoop-0.20.2-cdh3u4 -r 214dd731e3bdb687cb55988d3f47dd9e248c5690; compiled by 'root' on Mon May 7 14:01:59 PDT 2012
> ************************************************************/
> 12/08/09 20:56:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> 12/08/09 20:56:57 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 12/08/09 20:56:57 INFO util.GSet: VM type       = 64-bit
> 12/08/09 20:56:57 INFO util.GSet: 2% max memory = 17.77875 MB
> 12/08/09 20:56:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
> 12/08/09 20:56:57 INFO util.GSet: recommended=2097152, actual=2097152
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: fsOwner=hive (auth:SIMPLE)
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: supergroup=supergroup
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isPermissionEnabled=false
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=1000
> 12/08/09 20:56:57 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
> 12/08/09 20:56:57 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NoEmitMetricsContext
> 12/08/09 20:56:57 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
> 12/08/09 20:56:57 ERROR namenode.NameNode: java.io.FileNotFoundException: /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
>         at java.io.RandomAccessFile.open(Native Method)
>         at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:614)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:591)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:449)
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:372)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:335)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:467)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1330)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1339)
>
> 12/08/09 20:56:57 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
> ************************************************************/

--
Nitin Pawar
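The trace fails while the namenode tries to take the storage lock, i.e. to create /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock, so the first thing to check is who owns that directory versus who is running the command. A minimal check, with the path copied from the log above (the suggestion that CDH packages usually have it owned by an 'hdfs' user is an assumption; verify against the actual ls output before changing anything):

```shell
# Namenode storage directory, copied from the error message above.
# Adjust if your dfs.name.dir points elsewhere.
NAME_DIR=/var/lib/hadoop-0.20/cache/hadoop/dfs/name

# Which user is this shell running as?
id -un

# Who owns the storage directory? Guarded so the check also runs on
# machines where the path does not exist.
ls -ld "$NAME_DIR" 2>/dev/null || echo "not found: $NAME_DIR"

# If the owner shown by ls is not your user, either start the namenode
# as that owner (on CDH packages this is often 'hdfs' -- an assumption,
# check the output), or hand the directory to the user you actually use:
#   sudo chown -R hive:hive "$NAME_DIR"
```

Changing ownership and changing the user you start the daemon as are alternatives; doing both is unnecessary, and chown -R should only be run once you are sure no other Hadoop daemon expects the old owner.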