ambari-user mailing list archives

From Sumit Mohanty <smoha...@hortonworks.com>
Subject Re: Provisioning 2 clusters which share the same HDFS
Date Mon, 24 Mar 2014 01:59:15 GMT
Glad it worked. I will add it to the wiki for others.


On Sun, Mar 23, 2014 at 3:37 PM, Anfernee Xu <anfernee.xu@gmail.com> wrote:

> Thanks, your suggestions were really helpful. Here's what I did:
>
> 1. Create a normal cluster (with HDFS).
> 2. Shut down the cluster (a sketch of the API call follows this list).
> 3. Remove the HDFS service from the cluster:
>    curl -H "X-Requested-By: ambari" -u admin:admin -X DELETE \
>      http://<ambari_server>:<port>/api/v1/clusters/<clusterName>/services/HDFS
> 4. Configure core-site to point at the other cluster's NameNode:
>    su - hadoop
>    /var/lib/ambari-server/resources/scripts/configs.sh -port <port> set \
>      <ambari_host> <cluster_name> core-site "fs.defaultFS" "hdfs://slc00dgd:55310"
>
>
>
>
> On Sat, Mar 22, 2014 at 7:56 AM, Sumit Mohanty <smohanty@hortonworks.com> wrote:
>
>> This should be possible through the API (I have not tried it myself).
>>
>> Here is what you are trying:
>> * Define a cluster with no HDFS (say just YARN and ZK)
>> * Add necessary configs for YARN and ZK
>> * Add/modify core-site and hdfs-site to have the correct property values
>> to point to the other cluster
>> * Start all services
>> You can do all of the above with the APIs (a rough sketch of the first steps follows).
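>>
>> A minimal sketch of creating the cluster and registering only the services
>> you want (the stack version, names, and ports below are placeholders of
>> mine, not something I have verified end to end):
>>
>>    # create the cluster against a stack version, then add YARN and ZooKeeper only
>>    curl -H "X-Requested-By: ambari" -u admin:admin -X POST \
>>      -d '{"Clusters":{"version":"HDP-2.1"}}' \
>>      http://<ambari_server>:<port>/api/v1/clusters/<clusterName>
>>    curl -H "X-Requested-By: ambari" -u admin:admin -X POST \
>>      http://<ambari_server>:<port>/api/v1/clusters/<clusterName>/services/YARN
>>    curl -H "X-Requested-By: ambari" -u admin:admin -X POST \
>>      http://<ambari_server>:<port>/api/v1/clusters/<clusterName>/services/ZOOKEEPER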
>>
>> A way to achieve it with as much help as possible from the Web FE:
>> * Create a cluster with HDFS, YARN, ZK (possibly Ganglia and Nagios if
>> you need them)
>> * After everything is set up and started correctly, stop all services
>> * Delete HDFS using APIs
>> * Modify hdfs-site and core-site to point to the other cluster (use
>> configs.sh; a sketch follows this list)
>> * Start all services
>> * Afterwards, you can clean up the leftover HDFS files/folders on this
>> cluster.
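>>
>> A sketch of the configs.sh step (the NameNode address is a placeholder;
>> use the value from the cluster that actually runs HDFS):
>>
>>    # read the current core-site, then point fs.defaultFS at the other cluster's NameNode
>>    /var/lib/ambari-server/resources/scripts/configs.sh -port <port> \
>>      get <ambari_host> <cluster_name> core-site
>>    /var/lib/ambari-server/resources/scripts/configs.sh -port <port> \
>>      set <ambari_host> <cluster_name> core-site "fs.defaultFS" "hdfs://<other_cluster_nn>:<nn_port>"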
>>
>> *The above strategy is theoretically possible but I have not tried it*.
>> So do try it on a test cluster first. The Apache Ambari wiki has pages
>> with sample API calls.
>>
>> Feel free to write up a summary if you go the above route and we can add
>> it to the wiki.
>>
>> -Sumit
>>
>>
>> On Fri, Mar 21, 2014 at 1:15 PM, Anfernee Xu <anfernee.xu@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Here's my situation: I have 2 YARN clusters (A and B). Provisioning
>>> cluster A is straightforward; it will have HDFS, YARN, and MR, with the NN
>>> and RM running on the master machines and the DataNodes and NodeManagers
>>> running on the slave machines as usual. The special requirement comes from
>>> cluster B: there I only run the YARN components (RM and NM) and access the
>>> HDFS provisioned in cluster A (like an HDFS client). Without Ambari, I
>>> could copy core-site.xml/hdfs-site.xml from A to B, so is it possible to
>>> do this in Ambari? And how?
>>>
>>> --
>>> --Anfernee
>>>
>>
>>
>
>
>
>
> --
> --Anfernee
>

