hadoop-hdfs-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Master /slave file configuration for multiple datanodes on same machine
Date Thu, 31 Jul 2014 04:13:36 GMT
The SSH-driven slaves file approach will not work for the goal of
running multiple slave daemons per host; for that, each daemon is
instead expected to use its own configuration directory.

You can instead use "hadoop --config custom-dir datanode" to launch
them directly.
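
For example, assuming the two configuration directories are named conf
and conf2 as in the thread below (the paths here are illustrative, not
prescribed):

    # start a datanode using the first configuration directory
    hadoop --config /path/to/hadoop/conf datanode

    # start a second datanode on the same host with its own directory
    hadoop --config /path/to/hadoop/conf2 datanode

For the second daemon to come up, its hdfs-site.xml must override the
ports and local paths the first datanode already holds, typically
dfs.datanode.address, dfs.datanode.http.address,
dfs.datanode.ipc.address and dfs.datanode.data.dir; otherwise it will
fail with port-bind or storage-lock errors.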

On Wed, Jul 30, 2014 at 1:24 PM, Sindhu Hosamane <sindhuht@gmail.com> wrote:
> Hello friends,
> I have set up multiple datanodes on the same machine following the link
> http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201009.mbox/<A3EF3F6AF24E204B812D1D24CCC8D71A03688F76@mse16be2.mse16.exchange.ms>
> So now I have both conf and conf2 in my hadoop directory.
> How should the masters and slaves files of conf and conf2 look if I
> want conf to be the master and conf2 to be the slave?
> Also, how should the /etc/hosts file look?
> Please help me, I am really stuck.
> Regards,
> Sindhu

Harsh J
