Subject: Re: namenode don't start
From: "Ravi ." <iphulari@gmail.com>
To: hdfs-user@hadoop.apache.org
Cc: Khaled Ben Bahri
Date: Thu, 24 Feb 2011 11:08:10 -0800

You forgot to format the HDFS file system before starting the namenode. The following line in your error log explains this:

11/02/24 10:43:48 INFO common.Storage: Storage directory /usr/local/hadoop-0.20.2/namespace does not exist.
Please format HDFS using the following command:

$HADOOP_HOME/bin/hadoop namenode -format

On Thu, Feb 24, 2011 at 10:53 AM, Khaled Ben Bahri <khaled-bbk@hotmail.com> wrote:
> Hello all,
>
> I'm a new user of Hadoop HDFS.
> I configured it as described on the site, on 2 virtual machines.
> When I try to start it, the datanode starts, but the namenode and the
> secondary namenode fail to start.
>
> I get the error message below.
> I don't know whether the problem is with the network, because the IP
> address of the master is different from what is shown in the third line.
>
> 11/02/24 10:43:47 INFO namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = ubuntu/127.0.1.1
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.2
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> ************************************************************/
> 11/02/24 10:43:48 INFO metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=8020
> 11/02/24 10:43:48 INFO namenode.NameNode: Namenode up at: ubuntu.local/157.159.103.83:8020
> 11/02/24 10:43:48 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
> 11/02/24 10:43:48 INFO metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
> 11/02/24 10:43:48 INFO namenode.FSNamesystem: fsOwner=vadmin,vadmin,adm,dialout,cdrom,plugdev,lpadmin,sambashare,admin
> 11/02/24 10:43:48 INFO namenode.FSNamesystem: supergroup=supergroup
> 11/02/24 10:43:48 INFO namenode.FSNamesystem: isPermissionEnabled=true
> 11/02/24 10:43:48 INFO metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
> 11/02/24 10:43:48 INFO namenode.FSNamesystem: Registered FSNamesystemStatusMBean
> 11/02/24 10:43:48 INFO common.Storage: Storage directory /usr/local/hadoop-0.20.2/namespace does not exist.
> 11/02/24 10:43:48 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop-0.20.2/namespace is in an inconsistent state: storage directory does not exist or is not accessible.
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
> 11/02/24 10:43:48 INFO ipc.Server: Stopping server on 8020
> 11/02/24 10:43:48 ERROR namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop-0.20.2/namespace is in an inconsistent state: storage directory does not exist or is not accessible.
>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:290)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:311)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:292)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:201)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)
>
> 11/02/24 10:43:48 INFO namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
> ************************************************************/
>
> Thanks for your help.
> Best regards,
> Khaled
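
For reference: the directory the log complains about is whatever `dfs.name.dir` points to in your configuration (in Hadoop 0.20.x this property lives in conf/hdfs-site.xml), and running the format command initializes it. A minimal sketch, assuming the path shown in the error log above is the one you configured:

```xml
<!-- conf/hdfs-site.xml (sketch; the path below is taken from the error log) -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop-0.20.2/namespace</value>
  </property>
</configuration>
```

After checking this, run `$HADOOP_HOME/bin/hadoop namenode -format` once and then `$HADOOP_HOME/bin/start-dfs.sh`. Note that formatting erases any existing namenode metadata, so do it only on a fresh install.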
