hadoop-common-user mailing list archives

From: Shi Yu <sh...@uchicago.edu>
Subject: Re: Unknown Host Exception
Date: Sun, 10 Oct 2010 15:44:10 GMT
Check your log, especially the hadoop-**-tasktracker-TEMP.log. What does
it say?
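On a default install the daemon logs live under $HADOOP_HOME/logs, with the
user name and host name embedded in the file name. A minimal sketch for
finding and reading the tasktracker log, assuming that default layout:

    cd $HADOOP_HOME/logs
    ls hadoop-*-tasktracker-*.log        # one log file per daemon, per host
    tail -n 100 hadoop-*-tasktracker-*.log

The last few stack traces in that file usually name the host or port the
tasktracker failed on.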

On 2010-10-10 10:07, siddharth raghuvanshi wrote:
> Hi,
>
> Thanks for your reply..
>
> In browser,
>
> http://localhost:50030/jobtracker.jsp is opening fine,
> but
> http://localhost:50060/ is not.
>
> Since the jobtracker is running, I'm assuming localhost is reachable... am I
> wrong?
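If the page on 50060 does not load, the likely reason is that the tasktracker
daemon itself is not up, even though the jobtracker is. Two quick checks,
assuming a standard Linux box with the JDK's jps on the path:

    jps                          # should list NameNode, DataNode, JobTracker, TaskTracker
    netstat -tln | grep 50060    # is anything listening on the tasktracker web port?

If TaskTracker is missing from the jps output, its log (see above) should say
why it died.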
>
> Regards
> Siddharth
>
> On Sun, Oct 10, 2010 at 8:24 PM, Shi Yu <shiyu@uchicago.edu> wrote:
>
>> Hi. Were you running Hadoop on your own computer or on a cluster? My guess
>> is you were running it on your own computer. I once observed the same
>> problem on my laptop when I switched from wireless to a fixed-line
>> connection: the IP address changed, but for some reason the configuration
>> was not updated. After restarting the network service, the problem was
>> fixed. The second (replication) error is related to the first one, because
>> apparently the datanode is not running. So you'd better double-check the
>> network connection of the machine (make sure the "localhost" in your
>> configuration file is reachable).
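A few reachability checks along those lines, assuming the localhost ports
from the configuration quoted below (54310 for HDFS, 54311 for MapReduce):

    ping -c 1 localhost      # does localhost resolve and answer?
    getent hosts localhost   # should print 127.0.0.1
    telnet localhost 54310   # does the namenode RPC port accept connections?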
>>
>> Shi
>>
>>
>> On 2010-10-10 9:21, siddharth raghuvanshi wrote:
>>
>>> Hi Shi,
>>>
>>> I am a beginner in Hadoop. I have given the following values in
>>> core-site.xml:
>>>
>>> <name>hadoop.tmp.dir</name>
>>> <value>/users/user/hadoop-datastore/hadoop</value>
>>>
>>> <name>fs.default.name</name>
>>> <value>hdfs://localhost:54310</value>
>>>
>>> How do I check whether the host machine is reachable?
>>>
>>> Also, in mapred-site.xml, I have given:
>>>
>>> <name>mapred.job.tracker</name>
>>> <value>localhost:54311</value>
>>>
>>> Please check whether these values are correct; if they are not, what
>>> should I do?
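For reference, each name/value pair in core-site.xml and mapred-site.xml has
to sit inside a <property> element under the top-level <configuration>
element, or it will not be picked up. A sketch of complete entries, using the
paths and ports from your mail (everything else is boilerplate):

    <!-- core-site.xml -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/users/user/hadoop-datastore/hadoop</value>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
      </property>
    </configuration>

    <!-- mapred-site.xml -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
      </property>
    </configuration>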
>>>
>>> Waiting for your reply
>>> Regards
>>> Siddharth
>>>
>>>
>>>
>>> On Sat, Oct 9, 2010 at 11:47 PM, Shi Yu<shiyu@uchicago.edu>   wrote:
>>>
>>>> I suggest you change the hadoop.tmp.dir value in hadoop-site.xml
>>>> (0.19.x), then reformat and restart. Also double-check whether the host
>>>> machine in fs.default.name and mapred.job.tracker is reachable.
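That reformat/restart cycle is roughly the following, assuming a single-node
setup driven from $HADOOP_HOME (note that reformatting wipes everything
stored in HDFS):

    bin/stop-all.sh                # stop all daemons
    bin/hadoop namenode -format    # re-create the HDFS metadata under hadoop.tmp.dir
    bin/start-all.sh               # start namenode, datanode, jobtracker, tasktracker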
>>>>
>>>> Shi
>>>>
>>>>
>>>> On 2010-10-9 12:57, siddharth raghuvanshi wrote:
>>>>
>>>>> Hi,
>>>>> I am also getting the following error. Please tell me whether this error
>>>>> is related to the previous error I asked about an hour ago, or whether
>>>>> it is a separate error...
>>>>>
>>>>> [user@cs-sy-249 hadoop]$ bin/hadoop dfs -copyFromLocal /users/user/Desktop/test_data/ gutenberg
>>>>>
>>>>> 10/10/09 23:22:15 WARN hdfs.DFSClient: DataStreamer Exception:
>>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>>>>> /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes,
>>>>> instead of 1
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>>>>>
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:817)
>>>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
>>>>>
>>>>> 10/10/09 23:22:15 WARN hdfs.DFSClient: Error Recovery for block null bad
>>>>> datanode[0] nodes == null
>>>>>
>>>>> 10/10/09 23:22:15 WARN hdfs.DFSClient: Could not get block locations.
>>>>> Source file "/user/user/gutenberg/pg4300.txt" - Aborting...
>>>>> copyFromLocal: java.io.IOException: File /user/user/gutenberg/pg4300.txt
>>>>> could only be replicated to 0 nodes, instead of 1
>>>>> 10/10/09 23:22:15 ERROR hdfs.DFSClient: Exception closing file
>>>>> /user/user/gutenberg/pg4300.txt : org.apache.hadoop.ipc.RemoteException:
>>>>> java.io.IOException: File /user/user/gutenberg/pg4300.txt could only be
>>>>> replicated to 0 nodes, instead of 1
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>>>>>
>>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>>>>> /user/user/gutenberg/pg4300.txt could only be replicated to 0 nodes,
>>>>> instead of 1
>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1310)
>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:469)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:512)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:968)
>>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:964)
>>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:962)
>>>>>
>>>>>         at org.apache.hadoop.ipc.Client.call(Client.java:817)
>>>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>>         at $Proxy0.addBlock(Unknown Source)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3000)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2881)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1900(DFSClient.java:2139)
>>>>>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2329)
>>>>> [user@cs-sy-249 hadoop]$
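"could only be replicated to 0 nodes" almost always means the namenode sees
no live datanode to place the block on. A quick way to confirm, assuming the
stock 0.20 command line:

    bin/hadoop dfsadmin -report    # should report the number of datanodes available; 0 confirms none registered
    jps                            # DataNode should appear here; if not, check its log

If the datanode is down, its log will likely show the same
UnknownHostException as the namenode format output below.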
>>>>>
>>>>>
>>>>> Regards
>>>>> Siddharth
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Oct 9, 2010 at 10:39 PM, siddharth raghuvanshi
>>>>> <track009.siddharth@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> When I run the following command in Mandriva Linux:
>>>>>>   hadoop namenode -format
>>>>>>
>>>>>> I am getting the following error:
>>>>>>
>>>>>> 10/10/09 22:32:07 INFO namenode.NameNode: STARTUP_MSG:
>>>>>> /************************************************************
>>>>>> STARTUP_MSG: Starting NameNode
>>>>>> STARTUP_MSG:   host = java.net.UnknownHostException:
>>>>>> cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
>>>>>> STARTUP_MSG:   args = [-format]
>>>>>> STARTUP_MSG:   version = 0.20.2+320
>>>>>> STARTUP_MSG:   build =  -r 9b72d268a0b590b4fd7d13aca17c1c453f8bc957;
>>>>>> compiled by 'root' on Mon Jun 28 19:13:09 EDT 2010
>>>>>> ************************************************************/
>>>>>> Re-format filesystem in /users/user/hadoop-datastore/hadoop-user/dfs/name ? (Y or N) Y
>>>>>> 10/10/09 22:32:11 INFO namenode.FSNamesystem: fsOwner=user,user
>>>>>> 10/10/09 22:32:11 INFO namenode.FSNamesystem: supergroup=supergroup
>>>>>> 10/10/09 22:32:11 INFO namenode.FSNamesystem: isPermissionEnabled=true
>>>>>> 10/10/09 22:32:12 INFO metrics.MetricsUtil: Unable to obtain hostName
>>>>>> java.net.UnknownHostException: cs-sy-249.cse.iitkgp.ernet.in:
>>>>>> cs-sy-249.cse.iitkgp.ernet.in
>>>>>>         at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
>>>>>>         at org.apache.hadoop.metrics.MetricsUtil.getHostName(MetricsUtil.java:91)
>>>>>>         at org.apache.hadoop.metrics.MetricsUtil.createRecord(MetricsUtil.java:80)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.initialize(FSDirectory.java:78)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.<init>(FSDirectory.java:73)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:383)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:904)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:998)
>>>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1015)
>>>>>> 10/10/09 22:32:12 INFO common.Storage: Image file of size 94 saved in 0 seconds.
>>>>>> 10/10/09 22:32:12 INFO common.Storage: Storage directory
>>>>>> /users/user/hadoop-datastore/hadoop-user/dfs/name has been successfully
>>>>>> formatted.
>>>>>> 10/10/09 22:32:12 INFO namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException:
>>>>>> cs-sy-249.cse.iitkgp.ernet.in: cs-sy-249.cse.iitkgp.ernet.in
>>>>>> ************************************************************/
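That java.net.UnknownHostException means the machine cannot resolve its own
host name (cs-sy-249.cse.iitkgp.ernet.in). A common fix is an /etc/hosts
entry mapping that name to the machine's address; the 10.x.x.x address below
is only a placeholder, so substitute the real IP (or 127.0.0.1 for a purely
local single-node setup):

    # /etc/hosts (sketch; 10.0.0.249 is a hypothetical address)
    127.0.0.1     localhost
    10.0.0.249    cs-sy-249.cse.iitkgp.ernet.in  cs-sy-249

Afterwards, verify that the name resolves before re-running the format:

    getent hosts cs-sy-249.cse.iitkgp.ernet.in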
>>>>>>
>>>>>> Please help me in solving this problem.
>>>>>>
>>>>>> Thanks
>>>>>> Regards
>>>>>> Siddharth
>>>>>>


-- 
Postdoctoral Scholar
Institute for Genomics and Systems Biology
Department of Medicine, the University of Chicago
Knapp Center for Biomedical Discovery
900 E. 57th St. Room 10148
Chicago, IL 60637, US
Tel: 773-702-6799

