hadoop-user mailing list archives

From Richard Tang <tristartom.t...@gmail.com>
Subject Re: how to specify the root directory of hadoop on slave node?
Date Sun, 16 Sep 2012 15:44:34 GMT
Hi Hemanth, thanks for your responses. I have now restructured my HDFS
cluster to follow that norm: the two conditions are met, and there is no
longer any need to explicitly configure the Hadoop home directory for HDFS.
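
For anyone finding this thread in the archives, the layout I ended up with
looks roughly like this (the hostnames and install path are just placeholders):

  # same install directory and same unix user on the namenode and both datanodes
  /usr/local/hadoop

  # conf/slaves on the namenode, one datanode hostname per line
  datanode1.example.com
  datanode2.example.com

With that in place, bin/start-dfs.sh simply ssh-es into each host listed in
conf/slaves as the same user and finds the hadoop scripts at the same path.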

For the record: previously in my cluster, different nodes had Hadoop
installed in different directories, and HADOOP_HOME could be used to
configure the home directory where Hadoop is installed (though its use is
now deprecated).
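
In case it is useful for the archives: before restructuring, the workaround
was to set HADOOP_HOME explicitly on each slave, roughly like this (the path
is just an example):

  # in conf/hadoop-env.sh (or the login shell profile) on every slave node
  export HADOOP_HOME=/opt/hadoop

If I remember correctly, setting this variable on newer releases is what
produces the "$HADOOP_HOME is deprecated" warning mentioned above.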

Regards,
Richard

On Wed, Sep 12, 2012 at 12:06 AM, Hemanth Yamijala <
yhemanth@thoughtworks.com> wrote:

> Hi Richard,
>
> If you have installed the hadoop software in the same location on all
> machines and you have a common user on all the machines, then there
> should be no explicit need to specify anything more on the slaves.
>
> Can you tell us whether the above two conditions are true? If yes, some
> more details on what is failing when you run start-dfs.sh will help.
>
> Thanks
> Hemanth
>
>
> On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang <tristartom.tech@gmail.com> wrote:
>
>> Hi, All
>> I need to set up a Hadoop/HDFS cluster with the namenode on one machine and
>> two datanodes on two other machines. But after setting the datanode
>> machines in the conf/slaves file, running bin/start-dfs.sh cannot start
>> HDFS normally.
>> I am aware that I have not specified the root directory where Hadoop is
>> installed on the slave nodes, nor the OS user account used to run Hadoop there.
>> How do I specify where Hadoop/HDFS is locally installed on a slave node?
>> Also, how do I specify the user account used to start HDFS there?
>>
>> Regards,
>> Richard
>>
>
>
