Date: Sat, 20 Oct 2012 21:15:56 +0800 (SGT)
From: Sundeep Kambhmapati <ksundeepsatya@yahoo.co.in>
Subject: Re: Namenode shutting down while creating cluster
To: user@hadoop.apache.org
Cc: lists@balajin.net

Thank you, Balaji.
I checked gethostbyname(sk.r252.0) and it gives 10.0.2.15, which is also the IP address I see in ifconfig.
ssh sk.r252.0 connects to 10.0.2.15.
ping sk.r252.0 pings 10.0.2.15.

Can you please help me with the issue?

Regards,
Sundeep
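
The failure in the log quoted below is "Incomplete HDFS URI, no host", which DistributedFileSystem throws when uri.getHost() is null. java.net.URI can return a null host for sk.r252.0 even when the name resolves, since a hostname whose last label starts with a digit ("0") is not a valid server-based authority and the parser falls back to a registry-based one. A minimal sketch of that check (the CheckUri class name is illustrative, not from the thread):

***CheckUri.java (illustrative sketch)***
import java.net.InetAddress;
import java.net.URI;
import java.net.UnknownHostException;

public class CheckUri {
    public static void main(String[] args) throws Exception {
        String name = "sk.r252.0";

        // Name resolution as gethostbyname/ssh/ping see it; this only succeeds on a
        // machine whose /etc/hosts or DNS actually knows the name.
        try {
            System.out.println("resolves to: " + InetAddress.getByName(name).getHostAddress());
        } catch (UnknownHostException e) {
            System.out.println("does not resolve here: " + e.getMessage());
        }

        // How java.net.URI parses the same name when it appears in fs.default.name.
        // "0" is not a valid last label for a server-based hostname, so the parser
        // treats the authority as registry-based and getHost() returns null.
        URI uri = new URI("hdfs://" + name + ":54310");
        System.out.println("authority = " + uri.getAuthority());  // sk.r252.0:54310
        System.out.println("host      = " + uri.getHost());       // null
    }
}

On a machine set up like the one described above, the first lookup should print 10.0.2.15 while the URI host still comes back null, which would match ssh and ping working even though the NameNode fails when it builds the hdfs:// filesystem for the trash emptier.
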
________________________________
From: Balaji Narayanan (பாலாஜி நாராயணன்) <lists@balajin.net>
To: "user@hadoop.apache.org" <user@hadoop.apache.org>; Sundeep Kambhmapati <ksundeepsatya@yahoo.co.in>
Sent: Saturday, 20 October 2012 2:12 AM
Subject: Re: Namenode shutting down while creating cluster

Seems like an issue with resolution of sk.r252.0. Can you ensure that it resolves?

On Friday, October 19, 2012, Sundeep Kambhmapati wrote:

Hi Users,
My name node is shutting down soon after it is started.
Here is the log. Can someone please help me?

2012-10-19 23:20:42,143 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = sk.r252.0/10.0.2.15
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2012-10-19 23:20:42,732 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=54310
2012-10-19 23:20:42,741 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: sk.r252.0/10.0.2.15:54310
2012-10-19 23:20:42,745 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2012-10-19 23:20:42,747 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-10-19 23:20:43,074 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
2012-10-19 23:20:43,077 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-10-19 23:20:43,077 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-10-19 23:20:43,231 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
2012-10-19 23:20:43,239 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStatusMBean
2012-10-19 23:20:43,359 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2012-10-19 23:20:43,379 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 loaded in 0 seconds.
2012-10-19 23:20:43,380 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2012-10-19 23:20:43,415 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 94 saved in 0 seconds.
2012-10-19 23:20:43,612 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 758 msecs
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of over-replicated blocks = 0
2012-10-19 23:20:43,615 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2012-10-19 23:20:43,616 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2012-10-19 23:20:44,450 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-10-19 23:20:44,711 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2012-10-19 23:20:44,715 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2012-10-19 23:20:44,715 INFO org.mortbay.log: jetty-6.1.14
2012-10-19 23:20:47,021 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2012-10-19 23:20:47,022 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2012-10-19 23:20:47,067 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2012-10-19 23:20:47,086 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: starting
2012-10-19 23:20:47,089 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2012-10-19 23:20:47,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: starting
2012-10-19 23:20:47,130 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: starting
2012-10-19 23:20:47,148 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: starting
2012-10-19 23:20:47,165 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: starting
2012-10-19 23:20:47,183 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: starting
2012-10-19 23:20:47,200 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: starting
2012-10-19 23:20:47,803 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
2012-10-19 23:20:47,804 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: starting
2012-10-19 23:20:47,806 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: starting
2012-10-19 23:20:48,685 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
2012-10-19 23:20:48,691 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2012-10-19 23:20:48,690 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
        at java.lang.Thread.run(Thread.java:636)
2012-10-19 23:20:48,771 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2012-10-19 23:20:48,775 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 54310: exiting
2012-10-19 23:20:48,780 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 54310: exiting
2012-10-19 23:20:48,781 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 54310: exiting
2012-10-19 23:20:48,782 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 54310: exiting
2012-10-19 23:20:48,783 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310: exiting
2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 54310: exiting
2012-10-19 23:20:48,784 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 54310: exiting
2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 54310: exiting
2012-10-19 23:20:48,785 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 54310: exiting
2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: exiting
2012-10-19 23:20:48,786 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 54310
2012-10-19 23:20:48,788 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2012-10-19 23:20:48,790 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: Incomplete HDFS URI, no host: hdfs://sk.r252.0:54310
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:78)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
        at org.apache.hadoop.fs.Trash.<init>(Trash.java:62)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startTrashEmptier(NameNode.java:208)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:204)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:956)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:965)

2012-10-19 23:20:48,995 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sk.r252.0/10.0.2.15

***core-site.xml***
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://sk.r252.0:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
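
If the URI parsing of sk.r252.0 is what trips up the fs.default.name value above, one workaround sometimes tried is an authority that java.net.URI can parse as host:port, for example the resolved address mentioned earlier in the thread. This is a sketch only, not a confirmed fix for this cluster:

<property>
  <name>fs.default.name</name>
  <!-- illustrative alternative: 10.0.2.15 is the address sk.r252.0 resolves to -->
  <value>hdfs://10.0.2.15:54310</value>
</property>

A hostname whose labels begin with letters, mapped in /etc/hosts on every node, would serve the same purpose.
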
***mapred-site.xml***
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>sk.r252.0:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>

***hdfs-site.xml***
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>
</configuration>

Can someone please help me?

Regards,
Sundeep

--
Thanks
-balaji
--
http://balajin.net/blog/
http://flic.kr/balajijegan