hadoop-common-user mailing list archives

From "Tiger Uppercut" <get2thachop...@gmail.com>
Subject Re: setting up hadoop on a single node, vanilla arguments
Date Mon, 26 Mar 2007 09:03:43 GMT
Thanks Philippe.

Yeah, sorry, I should have mentioned that I tried using the hostname of
my machine first, so I had the following hadoop-site.xml settings:

<property>
  <name>fs.default.name</name>
  <value>tiger.stanford.edu:9000</value>
</property>

<!-- map/reduce properties -->

<property>
  <name>mapred.job.tracker</name>
  <value>tiger.stanford.edu:9001</value>
</property>

But that still didn't work:

tiger$ bin/hadoop jar hadoop-0.12.2-examples.jar wordcount input_dir output_dir

07/03/26 01:57:25 INFO ipc.Client: Retrying connect to server: tiger.stanford.edu/xx.yy.zz.aa:9000. Already tried 1 time(s).
...
xx.yy.zz.aa:9000. Already tried 10 time(s).
java.lang.RuntimeException: java.net.ConnectException: Connection refused
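One sanity check worth running first (a sketch, assuming the daemons
were started with bin/start-all.sh): "Connection refused" usually means
nothing is listening on that port at all, so confirm the daemon
processes exist and that port 9000 is open:

tiger$ jps                      # jps ships with the JDK; expect to see
                                # NameNode, DataNode, JobTracker, TaskTracker
tiger$ netstat -an | grep 9000  # should show a LISTEN entry if the NameNode is up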

Separately, Arun - I did have passphrase-less ssh enabled on this machine.

i.e., I executed:

# generate a DSA key with an empty passphrase
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# authorize that key for logins to this same machine
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
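
And one way to confirm the key is actually accepted (sshd silently
ignores authorized_keys when the permissions on ~/.ssh are too loose,
so that is worth checking as well):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh tiger.stanford.edu true    # should complete without a password prompt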

On 3/26/07, Philippe Gassmann <philippe.gassmann@anyware-tech.com> wrote:
> Hi,
>
> Tiger Uppercut wrote:
> > <snip/>
> >
> > <property>
> >  <name>fs.default.name</name>
> >  <value>localhost:9000</value>
> > </property>
> >
> > <!-- map/reduce properties -->
> >
> > <property>
> >  <name>mapred.job.tracker</name>
> >  <value>localhost:9001</value>
> > </property>
> >
> For fs.default.name and mapred.job.tracker, try using the hostname of
> your machine instead of localhost. When you use localhost:XXXX, the
> Hadoop servers listen on the loopback interface, but the map/reduce
> jobs (I do not know exactly where) see that connections to the
> tasktrackers come from 127.0.0.1 and try to reverse-DNS that address.
> Your system will not return localhost but the real name of your
> machine. On most Linux systems that name is bound to an Ethernet
> interface, so jobs will try to connect to that interface instead of
> the loopback one.
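>
> One way to see the mismatch (a sketch; getent queries the same name
> service the system resolver uses, per /etc/nsswitch.conf):
>
> getent hosts 127.0.0.1       # what a reverse lookup of the loopback returns
> getent hosts `hostname`      # which address, hence interface, the hostname maps to
>
> If the second command returns the Ethernet address while the servers
> are bound to 127.0.0.1, you get exactly this kind of connection failure.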
>
>
>
> > <property>
> >  <name>dfs.name.dir</name>
> >  <value>/some_dir/hadoop/hadoop_data</value>
> > </property>
> >
> > </configuration>
>
>
