hadoop-common-user mailing list archives

From: Stephen Watt <sw...@us.ibm.com>
Subject: Re: Hadoop on EC2
Date: Tue, 24 Nov 2009 21:24:51 GMT
Hi Mark

Are you starting the clusters from the contrib/ec2 scripts? Those scripts
bring up the cluster in a special way: the hostnames of the slaves are passed
in as they are assigned by EC2, so I think stop-all and start-all will not
work, since both assume the slaves are listed in the slaves file. It's been a
while since I looked at this, so excuse my lack of specifics, but I believe
there is a script in the /root directory of each EC2 image that these values
are passed into, and it does the work of starting the tasktracker/datanode
processes on each of those nodes.
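
If the slaves file is empty, one rough way around start-all/stop-all is to
manage the daemons per node with bin/hadoop-daemon.sh, which only acts on the
local machine and so never consults the slaves file. A minimal sketch,
assuming the /usr/local/hadoop-0.19.0 install path shown in your transcript
below:

cd /usr/local/hadoop-0.19.0

# on the master instance
bin/hadoop-daemon.sh start namenode
bin/hadoop-daemon.sh start jobtracker

# on each slave instance, under whatever hostname EC2 assigned it
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker

Substitute "stop" for "start" to shut the daemons down node by node.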

Kind regards
Steve Watt



From: Mark Kerzner <markkerzner@gmail.com>
To: core-user@hadoop.apache.org
Date: 11/24/2009 03:02 PM
Subject: Hadoop on EC2



Hi,

I am starting a cluster on EC2 with the Apache Hadoop distributions, 0.18 and
also 0.19. This all works fine, and when I log in I see that the Hadoop
daemons are already running. However, when I try

# which hadoop
/usr/local/hadoop-0.19.0/bin/hadoop
# jps
1355 Jps
1167 NameNode
1213 JobTracker
# hadoop fs -ls hdfs://localhost/
09/11/24 15:33:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).

I run stop-all.sh and then start-all.sh, and it does not help. What am I
doing wrong?

Thank you,
Mark
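
For reference, a sketch of one way to check which NameNode URI the client in
the transcript above is actually configured for; the conf/hadoop-site.xml
location and the fs.default.name property are the stock 0.19 names and may be
laid out differently on the EC2 image:

# show the configured NameNode URI
grep -A 1 fs.default.name /usr/local/hadoop-0.19.0/conf/hadoop-site.xml

# list against whatever is configured instead of hard-coding localhost
hadoop fs -ls /

If fs.default.name points at the instance's internal EC2 hostname rather than
localhost, the NameNode may not be listening on 127.0.0.1 at all, which would
explain the retries against localhost:8020.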


