hadoop-common-user mailing list archives

From "Meng Mao" <meng...@gmail.com>
Subject having different HADOOP_HOME for master and slaves?
Date Mon, 04 Aug 2008 18:10:25 GMT
I'm trying to set up 2 Hadoop installations on my master node, one of which
will have permissions that allow more users to run Hadoop.
But I don't really need anything different on the datanodes, so I'd like to
keep those as-is. With that switch, the HADOOP_HOME on the master will be
different from that on the datanodes.

After shutting down the old Hadoop, I tried to start-all the new one, and
encountered this:
$ bin/stop-all.sh
no jobtracker to stop
node2: bash: line 0: cd: /new/dir/hadoop/bin/..: No such file or directory
node2: bash: /new/dir/hadoop/bin/hadoop-daemon.sh: No such file or directory

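As far as I can tell, what's happening is that the master-side scripts build the remote command from the master's own HADOOP_HOME and run it over ssh on each host in conf/slaves. Here is a rough sketch of that mechanism; `build_remote_cmd` is an illustrative helper I made up, not a real Hadoop function, and the paths are just my example:

```shell
# Hypothetical sketch of why the error appears: the command string is
# assembled on the master, using the *master's* HADOOP_HOME, and then
# executed verbatim on each slave -- so the slaves need the same layout.
build_remote_cmd() {
  hadoop_home=$1   # the master's HADOOP_HOME, baked into the command
  action=$2        # e.g. "start datanode" or "stop datanode"
  echo "cd $hadoop_home/bin/.. ; $hadoop_home/bin/hadoop-daemon.sh $action"
}

# The master would then run, for each slave listed in conf/slaves:
#   ssh "$slave" "$(build_remote_cmd /new/dir/hadoop 'stop datanode')"
```

That matches the errors above: node2 is told to cd into /new/dir/hadoop/bin/.. and run /new/dir/hadoop/bin/hadoop-daemon.sh, neither of which exists there.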
I consulted the documentation at:
which only has two bits of info on this --
1) "The root of the distribution is referred to as HADOOP_HOME. All machines
in the cluster usually have the same HADOOP_HOME path."
2) "Once all the necessary configuration is complete, distribute the files
to the HADOOP_CONF_DIR directory on all the machines, typically

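For what it's worth, distributing the config per instruction 2) can be scripted. This is only an illustrative sketch -- the hostnames, paths, and the choice of scp are my assumptions; `push_conf` is not a real Hadoop helper:

```shell
# Illustrative: print the scp command that would copy the config
# directory to each slave (echo makes this a dry run).
push_conf() {
  conf_dir=$1
  shift
  for slave in "$@"; do
    echo scp -r "$conf_dir" "$slave:$conf_dir"
  done
}

# e.g. push_conf /new/dir/hadoop/conf node2 node3
```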
It turns out I had forgotten to do anything about the second instruction. After doing so, I tried again:
$ bin/stop-all.sh
no jobtracker to stop
node2: bash: /new/dir/hadoop/bin/hadoop-daemon.sh: No such file or directory

Ok, it found the config dir, but now it expects the binary to be located at
the same HADOOP_HOME that the master uses?

I suppose I could, for each datanode, symlink things to point to the actual
Hadoop installation. But really, I would like the setup that is hinted as
possible by statement 1). Is there a way I could do it, or should that bit
of documentation read, "All machines in the cluster _must_ have the same
HADOOP_HOME path"?

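The symlink workaround mentioned above could be sketched like this, run once on each datanode; both paths are illustrative assumptions, not from the actual cluster:

```shell
# Hypothetical workaround: make the master's HADOOP_HOME path resolve
# to the slave's real installation via a symlink.
link_master_home() {
  real_home=$1     # where Hadoop actually lives on this datanode
  master_home=$2   # path the master's start/stop scripts will use
  mkdir -p "$(dirname "$master_home")"
  ln -sfn "$real_home" "$master_home"
}

# e.g. on each datanode:
#   link_master_home /old/dir/hadoop /new/dir/hadoop
```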
