hadoop-common-dev mailing list archives

From <Jeff.Schm...@shell.com>
Subject RE: Noob question
Date Tue, 14 Jun 2011 21:32:54 GMT
Thanks for replying. It was a dumb mistake: I had 0.20.1 on the
namenode and 0.20.2 on the slaves. Problem solved.
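
For anyone who hits the same thing later: here is a quick way to
confirm every node runs the same build. This is a rough, untested
sketch; it assumes $HADOOP_HOME points to the same path on all nodes
and that conf/slaves lists one hostname per line.

    # Print the Hadoop build on the master, then on each slave over ssh.
    # Assumes passphraseless ssh is already set up.
    $HADOOP_HOME/bin/hadoop version | head -1
    for host in $(cat $HADOOP_HOME/conf/slaves); do
        echo "== $host =="
        ssh "$host" "$HADOOP_HOME/bin/hadoop version | head -1"
    done

Any node that prints a different version is the odd one out.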

Thanks again for replying! Cheers!

-----Original Message-----
From: Thomas Graves [mailto:tgraves@yahoo-inc.com] 
Sent: Tuesday, June 14, 2011 4:30 PM
To: common-dev@hadoop.apache.org; Schmitz, Jeff GSUSI-PTT/TBIM
Subject: Re: Noob question

It looks like it thinks /usr/local/hadoop-0.20.1/ is $HADOOP_HOME. Did
you install Hadoop on all the slave boxes in the same location as the
box you have working?  I'm assuming you are using the start-all.sh
scripts. That script goes to each slave box, tries to cd to
$HADOOP_HOME, and runs the start commands from there.
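
If you want to check that quickly, something like this should do it.
It's just a sketch; it assumes conf/slaves lists one hostname per line
and that passphraseless ssh already works to each slave.

    # start-all.sh effectively does an ssh + cd per slave, so the
    # install directory has to exist at the same path on every host.
    for host in $(cat $HADOOP_HOME/conf/slaves); do
        if ssh "$host" test -d /usr/local/hadoop-0.20.1; then
            echo "$host: OK"
        else
            echo "$host: missing /usr/local/hadoop-0.20.1"
        fi
    done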

Tom


On 6/14/11 2:09 PM, "Jeff.Schmitz@shell.com" <Jeff.Schmitz@shell.com>
wrote:

> Hello there!  I was running in pseudo-distributed configuration and
> everything was working fine. Now I have some more nodes and am trying
> to run fully distributed: I followed the docs and added the slaves
> file...
> 
> Set up passphraseless ssh ...
> 
> What am I missing? I'm getting this error at start-up:
> 
> Cheers - 
> 
> Jeffery Schmitz
> Projects and Technology
> 3737 Bellaire Blvd Houston, Texas 77001
> Tel: +1-713-245-7326 Fax: +1-713-245-7678
> Email: Jeff.Schmitz@shell.com
> 
> "TK-421, why aren't you at your post?"