hadoop-common-user mailing list archives

From jason hadoop <jason.had...@gmail.com>
Subject Re: No route to host prevents from storing files to HDFS
Date Thu, 23 Apr 2009 13:53:47 GMT
Can you give us your network topology?
I see at least three IP addresses:
192.168.253.20, 192.168.253.32 and 192.168.253.21

In particular, please provide the fs.default.name you have set, the
hadoop-site.xml for each machine,
the slaves file (with IP address mappings if needed), the output of
netstat -a -n -t -p | grep java (hopefully you run Linux),
and the output of jps for each machine.

That should let us see which servers are binding to which ports on which
machines, and where your cluster thinks things should be happening.

Also, iptables -L for each machine as an afterthought - just for paranoia's
sake.
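As a side note for anyone hitting this later: the distinction between "No route to host", a connection timeout, and "Connection refused" already narrows down the cause before running any of the commands above. A minimal sketch (not from the thread; host and port are whatever your fs.default.name points at) that classifies a TCP connect attempt the same way the DFS client's error would:

```python
import errno
import socket

def probe(host, port, timeout=3.0):
    """Try a TCP connect to the NameNode/DataNode address and classify
    the failure, mirroring the exceptions the DFS client reports."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except socket.timeout:
        # No reply at all: packets silently dropped (firewall DROP rule?)
        return "timeout (possibly a firewall DROP rule)"
    except OSError as e:
        if e.errno == errno.EHOSTUNREACH:
            # Matches "No route to host": routing problem or iptables REJECT
            return "no route to host (routing or iptables REJECT)"
        if e.errno == errno.ECONNREFUSED:
            # Host is reachable but nothing is listening on that port
            return "connection refused (host up, no daemon on that port)"
        return "error: %s" % e
    finally:
        s.close()
```

Running it against 192.168.253.20:8020 (the fs.default.name from this thread) from each machine would show quickly whether the problem is routing, a firewall, or a NameNode that simply is not listening.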

On Thu, Apr 23, 2009 at 2:45 AM, Stas Oskin <stas.oskin@gmail.com> wrote:

> Hi.
>
> Maybe, but there will still be at least one virtual network adapter on the
> > host. Try turning them off.
>
>
> Nope, still throws "No route to host" exceptions.
>
> I have another IP address defined on this machine - 192.168.253.21, for the
> same network adapter.
>
> Any idea if this has an impact?
>
>
> >
> >
> >> The fs.default.name is:
> >> hdfs://192.168.253.20:8020
> >>
> >
> > what happens if you switch to hostnames over IP addresses?
>
>
> Actually, I never tried this, but the point is that HDFS worked just fine
> with this setup before.
>
> Regards.
>
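For reference, the hostname-based fs.default.name suggested above would look like the fragment below in hadoop-site.xml (a sketch; "namenode-host" is a placeholder for whatever name resolves to 192.168.253.20 on every machine):

```xml
<property>
  <name>fs.default.name</name>
  <!-- namenode-host must resolve identically on all nodes
       (via DNS or an /etc/hosts entry on each machine) -->
  <value>hdfs://namenode-host:8020</value>
</property>
```

The usual catch with multi-homed hosts like the one in this thread is that the name must resolve to the same, routable address from every node, not to 127.0.0.1 or to the second interface's address.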



-- 
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
