From: Shi Yu
Date: Sun, 10 Oct 2010 10:44:10 -0500
To: common-user@hadoop.apache.org
Subject: Re: Unknown Host Exception

Check your
log, especially the hadoop-**-tasktracker-TEMP.log. What does it say?

On 2010-10-10 10:07, siddharth raghuvanshi wrote:
> Hi,
>
> Thanks for your reply.
>
> In the browser, http://localhost:50030/jobtracker.jsp opens fine, but
> http://localhost:50060/ does not.
>
> Since the jobtracker is running, I'm assuming localhost is reachable.. am I
> wrong?
>
> Regards
> Siddharth
>
> On Sun, Oct 10, 2010 at 8:24 PM, Shi Yu wrote:
>
>> Hi. Were you trying Hadoop on your own computer or on a cluster? My guess
>> is you were trying it on your own computer. I once observed the same
>> problem on my laptop when I switched from wireless to a fixed-line
>> connection: the IP address changed, but for some reason the configuration
>> was not updated. After restarting the network service, the problem was
>> fixed. The second (replication) error is related to the first one, because
>> apparently the data node is not running. So you'd better double-check the
>> network connection of the machine (make sure the "localhost" in your
>> configuration file is reachable).
>>
>> Shi
>>
>> On 2010-10-10 9:21, siddharth raghuvanshi wrote:
>>
>>> Hi Shi,
>>>
>>> I am a beginner in Hadoop. I have given the following values in
>>> core-site.xml:
>>>
>>>   <property>
>>>     <name>hadoop.tmp.dir</name>
>>>     <value>/users/user/hadoop-datastore/hadoop</value>
>>>   </property>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <value>hdfs://localhost:54310</value>
>>>   </property>
>>>
>>> How can I check whether the host machine is reachable or not?
>>>
>>> Also, in mapred-site.xml I have given:
>>>
>>>   <property>
>>>     <name>mapred.job.tracker</name>
>>>     <value>localhost:54311</value>
>>>   </property>
>>>
>>> Please check whether these values are correct; if they are not, what
>>> should I do?
>>>
>>> Waiting for your reply.
>>> Regards
>>> Siddharth
>>>
>>> On Sat, Oct 9, 2010 at 11:47 PM, Shi Yu wrote:
>>>
>>>> I suggest you change the hadoop.tmp.dir value in hadoop-site.xml
>>>> (0.19.x), then reformat and restart. Also double-check whether the host
>>>> machine in fs.default.name and mapred.job.tracker is reachable.
>>>>
>>>> Shi
>>>>
>>>> On 2010-10-9 12:57, siddharth raghuvanshi wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am also getting the following error. Please tell me whether it is
>>>>> related to the error I asked about an hour ago, or whether it is a
>>>>> separate error...
>>>>>
>>>>> [user@cs-sy-249 hadoop]$ bin/hadoop dfs -copyFromLocal /users/user/Desktop/test_data/ gutenberg
>>>>>
>>>>> 10/10/09 23:22:15 WARN hdfs.DFSClient: DataStreamer Exception:
>>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>>>>>
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:817)
>>>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
>>>>>
>>>>> 10/10/09 23:22:15 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
>>>>> 10/10/09 23:22:15 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/user/gutenberg/pg4300.txt" - Aborting...
>>>>> copyFromLocal: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
>>>>> 10/10/09 23:22:15 ERROR hdfs.DFSClient: Exception closing file /user/user/gutenberg/pg4300.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>>>>>
>>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes, instead of 1
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>>>>>
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:817)
>>>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
>>>>> [user@cs-sy-249 hadoop]$
>>>>>
>>>>> Regards
>>>>> Siddharth
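The "could only be replicated to 0 nodes, instead of 1" error above means the NameNode is up (it answered the RPC), but no live DataNode was registered to receive a block replica. A quick way to see which daemons from the quoted configuration are actually listening is a small port probe; the sketch below is illustrative only, with the NameNode and JobTracker ports taken from the core-site.xml and mapred-site.xml values quoted earlier in the thread, and 50010 assumed as the 0.20-era default DataNode data-transfer port:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP service is accepting connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Ports come from the configuration quoted in this thread; 50010 is an
# assumption (the default DataNode data-transfer port in Hadoop 0.20).
for what, port in [("NameNode RPC (fs.default.name)", 54310),
                   ("JobTracker RPC (mapred.job.tracker)", 54311),
                   ("DataNode data transfer", 50010)]:
    print(what, "listening:", port_open("localhost", port))
```

If the DataNode port is closed while the NameNode port is open, the replication error is expected; `jps` output and the DataNode log should then show why the DataNode failed to start.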
>>>>>
>>>>> On Sat, Oct 9, 2010 at 10:39 PM, siddharth raghuvanshi <track009.siddharth@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> When I run the following command on Mandriva Linux:
>>>>>>
>>>>>>   hadoop namenode -format
>>>>>>
>>>>>> I get the following error:
>>>>>>
>>>>>> 10/10/09 22:32:07 INFO namenode.NameNode: STARTUP_MSG:
>>>>>> /************************************************************
>>>>>> STARTUP_MSG: Starting NameNode
>>>>>> STARTUP_MSG:   host = java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
>>>>>> STARTUP_MSG:   args = [-format]
>>>>>> STARTUP_MSG:   version = 0.20.2+320
>>>>>> STARTUP_MSG:   build = -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957; compiled by 'root' on Mon Jun 28 19:13:09 EDT 2010
>>>>>> ************************************************************/
>>>>>> Re-format filesystem in /users/user/hadoop-datastore/hadoop-user/dfs/name ? (Y or N) Y
>>>>>> 10/10/09 22:32:11 INFO namenode.FSNamesystem: fsOwner=user,user
>>>>>> 10/10/09 22:32:11 INFO namenode.FSNamesystem: supergroup=supergroup
>>>>>> 10/10/09 22:32:11 INFO namenode.FSNamesystem: isPermissionEnabled=true
>>>>>> 10/10/09 22:32:12 INFO metrics.MetricsUtil: Unable to obtain hostName
>>>>>> java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
>>>>>>         at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
>>>>>>         at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
>>>>>>         at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:78)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:73)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:383)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:904)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:998)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015)
>>>>>> 10/10/09 22:32:12 INFO common.Storage: Image file of size 94 saved in 0 seconds.
>>>>>> 10/10/09 22:32:12 INFO common.Storage: Storage directory /users/user/hadoop-datastore/hadoop-user/dfs/name has been successfully formatted.
>>>>>> 10/10/09 22:32:12 INFO namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
>>>>>> ************************************************************/
>>>>>>
>>>>>> Please help me in solving this problem.
>>>>>>
>>>>>> Thanks
>>>>>> Regards
>>>>>> Siddharth

--
Postdoctoral Scholar
Institute for Genomics and Systems Biology
Department of Medicine, the University of Chicago
Knapp Center for Biomedical Discovery
900 E. 57th St. Room 10148
Chicago, IL 60637, US
Tel: 773-702-6799
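For the archive: the root cause of the original "Unknown Host Exception" is that the JVM cannot resolve the machine's own hostname, so java.net.InetAddress.getLocalHost() throws before the NameNode finishes starting. The same lookup can be reproduced outside Hadoop with a few lines of Python (a diagnostic sketch; the hostname is copied from the log above, nothing else is assumed):

```python
import socket

def resolves(name):
    """Return True if `name` resolves to an IP address -- the same lookup
    java.net.InetAddress.getLocalHost() must complete for the NameNode
    to start cleanly."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # True on any correctly configured machine
# On the failing node, this mirrors the exception in the log above:
# print(resolves("cs-sy-249.cse.iitkgp.ernet.in"))
```

When the node's own name fails to resolve, the usual single-node fix is an /etc/hosts entry mapping the machine's IP address to that name (alongside the standard `127.0.0.1 localhost` line); after that, `hadoop namenode -format` should print the real hostname in STARTUP_MSG instead of the exception.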