hadoop-user mailing list archives

From Hemanth Yamijala <yhema...@thoughtworks.com>
Subject Re: how to specify the root directory of hadoop on slave node?
Date Wed, 12 Sep 2012 04:06:29 GMT
Hi Richard,

If you have installed the hadoop software on the same locations on all
machines and if you have a common user on all the machines, then there
should be no explicit need to specify anything more on the slaves.

Can you tell us whether the above two conditions are true? If yes, some
more details on what is failing when you run start-dfs.sh would help.
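For reference, here is a minimal sketch of what those two conditions look like in practice on a Hadoop 1.x-era tarball install. The paths and hostnames (/usr/local/hadoop, datanode1, datanode2) are purely illustrative, not taken from your setup:

```shell
# conf/slaves on the namenode host: one datanode hostname per line
# (hypothetical hostnames)
cat > /usr/local/hadoop/conf/slaves <<'EOF'
datanode1
datanode2
EOF

# start-dfs.sh ssh-es to each host listed in conf/slaves AS THE SAME
# USER that ran the script, and starts the daemon from the same
# HADOOP_HOME path. So the install path must match on every machine;
# a quick check (assumed path):
ssh datanode1 "ls /usr/local/hadoop/bin/hadoop-daemon.sh"

# Passwordless ssh from the master user to the same user on each slave
# is what makes the scripted startup work:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id datanode1
ssh-copy-id datanode2
```

If the install paths or user names genuinely differ across machines, the usual workaround is to start the datanode daemons manually on each slave rather than relying on the ssh-based start-dfs.sh.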

Thanks
Hemanth

On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang <tristartom.tech@gmail.com> wrote:

> Hi, All
> I need to set up a hadoop/hdfs cluster with one namenode on one machine and
> two datanodes on two other machines. But after listing the datanode machines
> in the conf/slaves file, running bin/start-dfs.sh cannot start hdfs
> normally.
> I am aware that I have not specified the root directory where hadoop is
> installed on the slave nodes, or the OS user account used to run hadoop
> there.
> How do I specify where hadoop/hdfs is locally installed on each slave
> node? And how do I specify the user account used to start hdfs there?
>
> Regards,
> Richard
>
