ambari-user mailing list archives

From "Jiang, Ruhua" <rji...@akamai.com>
Subject Re: Manual install locations for Ambari, HDFS and HBase
Date Wed, 16 Mar 2016 19:29:27 GMT
I am also curious about the answer.

If you grep the code, you will notice that SO MANY Python/shell scripts hard-code absolute paths (/var/xxx, /usr/xxx). Not everyone can install software under /.
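A quick way to see this for yourself (a rough sketch; the stack-definition path below assumes a default Ambari server install and may be different on your machine):

    # List hard-coded /var and /usr references in the stack scripts shipped with Ambari
    grep -rnE --include='*.py' --include='*.sh' '/(var|usr)/' \
        /var/lib/ambari-server/resources/stacks/ | head -n 40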

Ruhua



From: Ganesh Viswanathan <gansvv@gmail.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 16, 2016 at 1:17 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Manual install locations for Ambari, HDFS and HBase

Hello-
I am trying to set up a Hadoop cluster using custom install locations. My root filesystem does not have enough storage, but I have a large ephemeral storage disk that I could mount and use for installing Ambari, HDFS, and HBase.

Is there a configuration setting in Ambari that can help me move all Ambari- and Hadoop-related storage (Ambari scripts, configs, logs, HDFS, HBase, ZooKeeper data, etc.) onto separate drives (e.g., /hadoop, /hbase, /ambari)?

I know this works when building HDFS and HBase from the Apache sources and customizing the installation by hand. I updated all of the relevant settings in the "Customize Services" step in Ambari (examples below), but I still see the Hadoop conf directories missing and Ambari scripts running from /etc, /var, etc. Is there a central root.dir-style setting for each of these services (Ambari itself and the services it deploys) that would let me change the locations?
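For concreteness, these are the kinds of data-directory properties I have been editing under Customize Services (the values are only an illustration based on the example mount points above; the property names are the standard HDFS/YARN/HBase/ZooKeeper ones):

    # hdfs-site
    dfs.namenode.name.dir        = /hadoop/hdfs/namenode
    dfs.datanode.data.dir        = /hadoop/hdfs/data

    # yarn-site
    yarn.nodemanager.local-dirs  = /hadoop/yarn/local
    yarn.nodemanager.log-dirs    = /hadoop/yarn/log

    # hbase-site
    hbase.rootdir                = hdfs://<namenode-host>:8020/apps/hbase/data
    hbase.tmp.dir                = /hbase/tmp

    # zoo.cfg (ZooKeeper)
    dataDir                      = /hadoop/zookeeper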


Thanks!
Ganesh