hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "NoRouteNoHost" by NarendraShah
Date Thu, 24 Nov 2011 05:48:54 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "NoRouteNoHost" page has been changed by NarendraShah:
http://wiki.apache.org/hadoop/NoRouteNoHost?action=diff&rev1=3&rev2=5

Comment:
None of these are Hadoop problems, they are host, network and firewall configuration issues.
As it is your cluster, only you can find out and track down the problem.  

- = No Route to Host =
+ You get a {{{NoRouteToHostException}}} when the target machine is not reachable, or when
a firewall on the network is blocking the connection. If you are not familiar with firewalls
and TCP connections, please review your network documentation first.
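For reference, the Java exception involved is {{{java.net.NoRouteToHostException}}}, which extends {{{SocketException}}} and hence {{{IOException}}}. A minimal sketch (not from the original page) showing why it often surfaces in Hadoop logs as a plain {{{IOException}}}:

```java
import java.io.IOException;
import java.net.NoRouteToHostException;
import java.net.SocketException;

public class ExceptionHierarchy {
    public static void main(String[] args) {
        // NoRouteToHostException extends SocketException, which extends IOException,
        // so client code that catches IOException will swallow it unless it
        // inspects the concrete type or the message.
        IOException e = new NoRouteToHostException("No route to host");
        System.out.println(e instanceof SocketException); // true
        System.out.println(e.getMessage());               // No route to host
    }
}
```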
  
- You get a TCP No Route To Host error, often wrapped in a Java {{{IOException}}}, when one
machine on the network does not know how to send TCP packets to the machine specified.
+ To resolve this, follow these steps to verify:
+ 1. Check that the hostname/IP address the client is using is correct.
+ 2. Using the ping utility (a command included in all operating systems), check that there
is no packet loss. If you find packet loss, there may be a network problem, so check with your
network administrator.
+ 3. Check that the IP address the client gets for the hostname is correct, using nslookup.
+ 4. On the server, try a {{{telnet localhost <port>}}} to see whether the port is open there.
+ 5. Try connecting to the server/port from a different machine, to see whether it is just the
single client misbehaving.
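The port checks in steps 4 and 5 can also be done programmatically. A minimal sketch, assuming a hypothetical host and port (the values below are illustrative, not from the original page):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // A NoRouteToHostException here points at a routing or firewall problem;
    // a ConnectException means the host is reachable but the port is closed.
    public static boolean canConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            System.err.println(host + ":" + port + " -> " + e);
            return false;
        }
    }

    public static void main(String[] args) {
        // Example: probe a service port from the client machine
        // (hostname and port are placeholders for your own cluster).
        System.out.println(canConnect("localhost", 8020, 2000));
    }
}
```

Running this from both the server itself and a remote client distinguishes a service that is down from a network path that is blocked.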
  
- Some possible causes (not an exclusive list):
-  * the hostname of the remote machine is wrong in the configuration files
-  * the client's host table {{{/etc/hosts}}} has an invalid IP address for the target host.
-  * the DNS server's host table has an invalid IPAddress for the target host.
-  * the client's routing tables or firewall rules (in Linux, {{{route}}} and {{{iptables}}}) are wrong.
-  * the DHCP server is publishing bad routing information.
-  * the client and server are on different subnets, and are not set up to talk to each other.
This may be accidental, or it may be a deliberate decision to lock down the Hadoop cluster.
-  * the clients networking is down. Check the cables, then work your way up.
- 
- None of these are Hadoop problems, they are network configuration/router issues. As it is
your network, only you can find out and track down the problem. The command "ping" helps,
as does "nslookup". 
- 
