hadoop-common-user mailing list archives

From "Jose Vidal" <jmvi...@gmail.com>
Subject Re: newbie install
Date Tue, 22 Jul 2008 22:45:16 GMT
Yes, the hosts file just has:

127.0.0.1 localhost hermes.cse.sc.edu hermes

So, do I need to change the hosts file on all the slaves, or just on the namenode?

I'm not root on these machines, so changing them means some gentle
handling of our sysadmin....
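
If I understand it right, the namenode's /etc/hosts should end up
looking something like this (a sketch using the 129.252.130.148
address from the logs below; the real entry is whatever DNS reports
for hermes):

127.0.0.1         localhost
129.252.130.148   hermes.cse.sc.edu hermes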

Jose

On Tue, Jul 22, 2008 at 5:37 PM, Edward J. Yoon <edward@udanax.org> wrote:
> If the machine has a static address, make sure your hosts file points
> the namenode's host name at that static address rather than at
> 127.0.0.1. It should look something like this, with the values
> replaced by your own:
>
> 127.0.0.1               localhost.localdomain localhost
> 192.x.x.x               yourhost.yourdomain.com yourhost
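>
> To double-check what the resolver actually returns (a quick sanity
> check; getent reads /etc/hosts the same way the JVM's lookup does):
>
> $ getent hosts yourhost.yourdomain.com
> 192.x.x.x       yourhost.yourdomain.com yourhost
>
> If that still prints 127.0.0.1, the hosts entry hasn't taken effect.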
>
> - Edward
>
> On Wed, Jul 23, 2008 at 6:03 AM, Jose Vidal <jmvidal@gmail.com> wrote:
>> I'm trying to install Hadoop on our Linux machines, but after
>> start-all.sh none of the slaves can connect:
>>
>> 2008-07-22 16:35:27,534 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting DataNode
>> STARTUP_MSG:   host = thetis/127.0.0.1
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.16.4
>> STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.16 -r 652614; compiled by 'hadoopqa' on Fri May  2 00:18:12 UTC 2008
>> ************************************************************/
>> 2008-07-22 16:35:27,643 WARN org.apache.hadoop.dfs.DataNode: Invalid directory in dfs.data.dir: directory is not writable: /work
>> 2008-07-22 16:35:27,699 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hermes.cse.sc.edu/129.252.130.148:9000. Already tried 1 time(s).
>> 2008-07-22 16:35:28,700 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hermes.cse.sc.edu/129.252.130.148:9000. Already tried 2 time(s).
>> 2008-07-22 16:35:29,700 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hermes.cse.sc.edu/129.252.130.148:9000. Already tried 3 time(s).
>> 2008-07-22 16:35:30,701 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hermes.cse.sc.edu/129.252.130.148:9000. Already tried 4 time(s).
>> 2008-07-22 16:35:31,702 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hermes.cse.sc.edu/129.252.130.148:9000. Already tried 5 time(s).
>> 2008-07-22 16:35:32,702 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hermes.cse.sc.edu/129.252.130.148:9000. Already tried 6 time(s).
>>
>> The same happens for the tasktrackers (port 9001).
>>
>> I think the problem has something to do with name resolution. Check these out:
>>
>> jmvidal@hermes:~/hadoop-0.16.4> telnet hermes.cse.sc.edu 9000
>> Trying 127.0.0.1...
>> Connected to hermes.cse.sc.edu (127.0.0.1).
>> Escape character is '^]'.
>> bye
>> Connection closed by foreign host.
>>
>> jmvidal@hermes:~/hadoop-0.16.4> host hermes.cse.sc.edu
>> hermes.cse.sc.edu has address 129.252.130.148
>>
>> jmvidal@hermes:~/hadoop-0.16.4> telnet 129.252.130.148 9000
>> Trying 129.252.130.148...
>> telnet: connect to address 129.252.130.148: Connection refused
>> telnet: Unable to connect to remote host: Connection refused
>>
>> So the first one connects but the second doesn't, even though they
>> both go to the same machine:port. My guess is that the Hadoop server
>> is closing the connection, but why?
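>>
>> For context, the port the slaves are trying comes from fs.default.name
>> in our hadoop-site.xml, which is set to something like this (from
>> memory, so roughly):
>>
>> <property>
>>   <name>fs.default.name</name>
>>   <value>hermes.cse.sc.edu:9000</value>
>> </property>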
>>
>> Thanks,
>> Jose
>>
>> --
>> Jose M. Vidal <jmvidal@gmail.com> http://jmvidal.cse.sc.edu
>> University of South Carolina http://www.multiagent.com
>>
>
>
>
> --
> Best regards,
> Edward J. Yoon,
> http://blog.udanax.org
>



-- 
Jose M. Vidal <jmvidal@gmail.com> http://jmvidal.cse.sc.edu
University of South Carolina http://www.multiagent.com
