hadoop-common-user mailing list archives

From "Tiger Uppercut" <get2thachop...@gmail.com>
Subject Re: setting up hadoop on a single node, vanilla arguments
Date Tue, 27 Mar 2007 03:02:47 GMT
I get similar errors when trying this:

[tiger]$ bin/hadoop dfs -put ~/hadoop/hadoop_data/ ~/hadoop/hadoop_data
07/03/26 20:00:42 INFO ipc.Client: Retrying connect to server:
tiger.stanford.edu/xx.yy.zz.aa:9000. Already tried 1 time(s).
...

07/03/26 20:00:42 INFO ipc.Client: Retrying connect to server:
tiger.stanford.edu/xx.yy.zz.aa:9000. Already tried 10 time(s).

Bad connection to FS. command aborted.
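
For reference, here is roughly what I plan to check next, in case someone can
spot the hole in it (this assumes the standard 0.12.x layout, run from the
hadoop install directory; the log file name is just how it looks on my box):

jps                               # should list NameNode, DataNode, JobTracker, TaskTracker
bin/hadoop namenode -format       # fresh install only; this wipes dfs.name.dir
bin/start-all.sh                  # start the dfs and mapreduce daemons
tail -50 logs/*-namenode-*.log    # bind/port errors show up here

If nothing ends up listening on port 9000, I assume every dfs command will
just keep retrying and then abort like above.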

On 3/26/07, Richard Yang <richardyang@richardyang.net> wrote:
>
> Is HDFS working? You can find out by moving files/folders from the local FS to HDFS.
>
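> A quick sketch of that check; the local file name is just an example:
>
>   bin/hadoop dfs -put /tmp/hello.txt /hello.txt
>   bin/hadoop dfs -ls /
>
> If the put succeeds and the ls shows the file, HDFS itself is working.
>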
> -----Original Message-----
> From: Tiger Uppercut [mailto:get2thachopper@gmail.com]
> Sent: Mon 3/26/2007 12:54 PM
> To: hadoop-user@lucene.apache.org
> Subject: Re: setting up hadoop on a single node, vanilla arguments
>
> Resending...I think my message got bounced earlier:
>
> On 3/26/07, Tiger Uppercut <get2thachopper@gmail.com> wrote:
> > Thanks Philippe.
> >
> > Yeah, sorry, I should have mentioned that I tried using the hostname
> > of my machine first, so I had the following hadoop-site.xml settings.
> >
> > <property>
> >   <name>fs.default.name</name>
> >   <value>tiger.stanford.edu:9000</value>
> > </property>
> >
> > <!-- map/reduce properties -->
> >
> > <property>
> >   <name>mapred.job.tracker</name>
> >   <value>tiger.stanford.edu:9001</value>
> > </property>
> >
> > But that still didn't work:
> >
> > tiger$ bin/hadoop jar hadoop-0.12.2-examples.jar wordcount input_dir output_dir
> >
> > 07/03/26 01:57:25 INFO ipc.Client: Retrying connect to server:
> > tiger.stanford.edu/
> > xx.yy.zz.aa:9000. Already tried 1 time(s).
> > ...
> > xx.yy.zz.aa:9000. Already tried 10 time(s).
> > java.lang.RuntimeException: java.net.ConnectException: Connection refused
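> >
> > (One thing I still want to rule out is whether anything is listening on the
> > namenode port at all; for example, with plain netstat/telnet, nothing
> > hadoop-specific:
> >
> >   netstat -an | grep 9000
> >   telnet tiger.stanford.edu 9000
> >
> > If netstat shows nothing bound on 9000, the connection refused above is just
> > the client finding no namenode there.)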
> >
> > Separately, Arun - I did have passphrase-less ssh enabled on this machine.
> >
> > i.e., I executed:
> >
> > ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
> > cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
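> >
> > To double-check the key setup, this should print ok without prompting for a
> > password (the chmod is only needed if sshd considers the file permissions
> > too open):
> >
> >   chmod 600 ~/.ssh/authorized_keys
> >   ssh tiger.stanford.edu echo ok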
> >
> > On 3/26/07, Philippe Gassmann <philippe.gassmann@anyware-tech.com> wrote:
> > > Hi,
> > >
> > > Tiger Uppercut wrote:
> > > > <snip/>
> > > >
> > > > <property>
> > > >  <name>fs.default.name</name>
> > > >  <value>localhost:9000</value>
> > > > </property>
> > > >
> > > > <!-- map/reduce properties -->
> > > >
> > > > <property>
> > > >  <name>mapred.job.tracker</name>
> > > >  <value>localhost:9001</value>
> > > > </property>
> > > >
> > > For fs.default.name and mapred.job.tracker, try using the hostname of
> > > your machine instead of localhost. When you use localhost:XXXX, the
> > > hadoop servers listen on the loopback interface only. But the mapreduce
> > > jobs (I do not know exactly where) see that connections to the
> > > tasktrackers come from 127.0.0.1 and try to reverse-DNS that address.
> > > Your system will not return localhost but the real name of your machine.
> > > On most Linux systems that name is bound to an ethernet interface, so
> > > jobs will try to connect to that interface instead of the loopback one.
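> > >
> > > A quick way to see what will actually happen is to check how the machine
> > > name resolves (the addresses below are only an illustration):
> > >
> > >   hostname
> > >   getent hosts tiger.stanford.edu
> > >
> > > If /etc/hosts maps that name to 127.0.0.1 instead of the real interface
> > > address, you can hit the same connection problems even with the hostname
> > > in hadoop-site.xml; the usual layout is something like:
> > >
> > >   127.0.0.1      localhost
> > >   xx.yy.zz.aa    tiger.stanford.edu   tiger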
> > >
> > >
> > >
> > > > <property>
> > > >  <name>dfs.name.dir</name>
> > > >  <value>/some_dir/hadoop/hadoop_data</value>
> > > > </property>
> > > >
> > > > </configuration>
> > >
> > >
> >
>
>
>
>
