hadoop-common-user mailing list archives

From "Meng Mao" <meng...@gmail.com>
Subject Re: having different HADOOP_HOME for master and slaves?
Date Mon, 04 Aug 2008 20:17:25 GMT
I see. I think I could also modify the hadoop-env.sh in the new conf/
folders on each datanode to point to the right place for HADOOP_HOME.
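For reference, the per-datanode override described above might look like the
following in each node's conf/hadoop-env.sh. The paths here are hypothetical
examples, not values from the thread:

```shell
# Hypothetical conf/hadoop-env.sh override on a slave whose Hadoop
# install lives at a different path than the master's.
export HADOOP_HOME=/opt/hadoop-0.17.1        # hypothetical install path on this node
export JAVA_HOME=/usr/lib/jvm/java           # hypothetical JVM location
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs    # keep logs under the local install
```

Since hadoop-env.sh is sourced by the daemon start scripts on each node, the
locally set HADOOP_HOME takes effect regardless of the path used on the master.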

On Mon, Aug 4, 2008 at 3:21 PM, Allen Wittenauer <aw@yahoo-inc.com> wrote:

>
> On 8/4/08 11:10 AM, "Meng Mao" <mengmao@gmail.com> wrote:
> > I suppose I could, for each datanode, symlink things to point to the
> > actual Hadoop installation. But really, I would like the setup that is
> > hinted as possible by statement 1). Is there a way I could do it, or
> > should that bit of documentation read, "All machines in the cluster
> > _must_ have the same HADOOP_HOME?"
>
>     If you run the -all scripts, they assume the location is the same.
> AFAIK, there is nothing preventing you from building your own -all scripts
> that point to the different location to start/stop the data nodes.
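A custom -all script of the kind Allen suggests might be sketched as below.
This is a minimal illustration, not the stock start-all.sh: the function name,
the two install paths, and the dry-run argument are all hypothetical, though
hadoop-daemon.sh and the conf/slaves file are the standard Hadoop pieces the
stock scripts use:

```shell
#!/bin/sh
# Hypothetical replacement for bin/start-all.sh for a cluster where the
# slaves' install path differs from the master's. Assumes passwordless
# ssh and a conf/slaves host list, as the stock scripts do.
start_cluster() {
  master_home=$1   # install path on the master (hypothetical example)
  slave_home=$2    # install path on the slaves (hypothetical example)
  run=${3:-}       # pass "echo" to dry-run: print commands instead of running

  # Start the namenode locally from the master's install.
  $run "$master_home/bin/hadoop-daemon.sh" start namenode

  # Start a datanode on each slave, using the slaves' install path.
  while read -r host; do
    $run ssh "$host" "$slave_home/bin/hadoop-daemon.sh start datanode"
  done < "$master_home/conf/slaves"
}

# Dry-run example: print the commands that would be issued.
# start_cluster /opt/hadoop-master /srv/hadoop echo
```

A matching stop script would issue `hadoop-daemon.sh stop` with the same two
paths; the point is simply that each command names the install path of the
machine it runs on.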


-- 
hustlin, hustlin, everyday I'm hustlin
