ambari-user mailing list archives

From aman poonia <aman.poonia...@gmail.com>
Subject Re: How to install and start apache distributed hadoop rather than hortonworks distribution
Date Thu, 13 Oct 2016 06:54:03 GMT
Hey Alejandro,

Thank you very much for pointing me to the right source code. I will see
what I can figure out from this. :-)

-- 
*With Regards,*
*Aman Poonia*

On Tue, Oct 11, 2016 at 12:28 AM, Alejandro Fernandez <
afernandez@hortonworks.com> wrote:

> I think that requirement is based on the fact that Ambari needs to be able
> to compare version numbers.
> Typically, each service's metainfo.xml file defines how to perform a yum
> install of its packages; Ambari can replace variables like ${stack_version}
> in the package names and insert wildcards (*).
>
> E.g.,
>
> <osSpecifics>
>   <osSpecific>
>     <osFamily>any</osFamily>
>     <packages>
>       <package>
>         <name>hbase</name>
>       </package>
>     </packages>
>   </osSpecific>
> </osSpecifics>
>
> Or
>
>
> <osSpecific>
>   <osFamily>redhat7,amazon2015,redhat6,suse11,suse12</osFamily>
>   <packages>
>     <package>
>       <name>atlas-metadata_${stack_version}</name>
>     </package>
>     <package>
>       <name>ambari-infra-solr-client</name>
>       <condition>should_install_infra_solr_client</condition>
>     </package>
>     <package>
>       <name>kafka_${stack_version}</name>
>     </package>
>   </packages>
> </osSpecific>
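>
> For illustration, here is a rough sketch (not the actual Ambari code) of the
> kind of substitution described above, assuming a version such as 2.5.0.0-1245
> gets formatted with underscores when it is placed into a package name:
>
> import re
>
> # Rough sketch only, not Ambari's real implementation: a ${stack_version}
> # placeholder in a package name is replaced with an underscore-formatted
> # version (Ambari may also insert * wildcards so yum matches the installed
> # build).
> def resolve_package_name(template, stack_version):
>     formatted = re.sub(r'[.-]', '_', stack_version)   # "2.5.0.0-1245" -> "2_5_0_0_1245" (assumed)
>     return template.replace('${stack_version}', formatted)
>
> print(resolve_package_name('kafka_${stack_version}', '2.5.0.0-1245'))
> # kafka_2_5_0_0_1245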
>
>
> However, you may have to change several other Python functions if you want
> package names that don't conform to that standard, or at least look at what
> these files do:
>
> ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
> ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py
> ambari-common/src/main/python/resource_management/libraries/functions/version.py
> ambari-server/src/main/resources/custom_actions/scripts/install_packages.py
>
> Thanks,
> Alejandro
>
> From: aman poonia <aman.poonia.29@gmail.com>
> Date: Saturday, October 8, 2016 at 2:28 AM
> To: Alejandro Fernandez <afernandez@hortonworks.com>
>
> Subject: Re: How to install and start apache distributed hadoop rather
> than hortonworks distribution
>
> Hi Alejandro,
>
> I downloaded Bigtop and created the ZooKeeper and Hadoop RPMs from the
> Apache-provided tarballs. Now I am trying to use these RPMs instead of the
> Hortonworks ones to deploy a Hadoop cluster, and I am running into difficulty:
> Ambari searches for specific package names like
> "yum install hadoop_x_x_x_x-xxxx"
> "yum install hadoop_x_x_x_x-xxxx-hdfs"
> and so on.
>
> How can I make it work with my own generated RPMs?
>
>
> --
> *With Regards:-*
> * Aman Poonia*
>
> On Fri, Oct 7, 2016 at 11:39 PM, Alejandro Fernandez <
> afernandez@hortonworks.com> wrote:
>
>> Hi Aman,
>>
>> Making your own distribution is no easy task. You can literally spend
>> months trying to do this, since it requires:
>>
>> - tooling (like the equivalent of conf-select and hdp-select to change
>>   symlinks; a rough sketch of such a tool follows this list)
>> - packaging of Hadoop into RPMs (or the equivalent for other OSes)
>> - finding compatible versions of each product
>> - providing default configs based on those versions
>> - your own stack advisor
>> - handling configs during stack upgrade (rolling/express)
>> - etc.
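>>
>> As a rough illustration of the first item above (a sketch under assumed
>> paths, not the real hdp-select), a *-select style tool mostly just repoints
>> a "current" symlink at a versioned install directory:
>>
>> import os
>> import sys
>>
>> ROOT = '/usr/mydistro'   # hypothetical install root
>>
>> def set_version(component, version):
>>     # point /usr/mydistro/current/<component> at /usr/mydistro/<version>/<component>
>>     target = os.path.join(ROOT, version, component)
>>     link = os.path.join(ROOT, 'current', component)
>>     if os.path.islink(link):
>>         os.remove(link)
>>     os.symlink(target, link)
>>
>> if __name__ == '__main__':
>>     # usage: mydistro-select <component> <version>
>>     _, component, version = sys.argv
>>     set_version(component, version)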
>>
>> What exactly are you trying to accomplish?
>>
>> Thanks,
>> Alejandro
>>
>> From: aman poonia <aman.poonia.29@gmail.com>
>> Date: Friday, October 7, 2016 at 4:53 AM
>> To: Alejandro Fernandez <afernandez@hortonworks.com>
>> Cc: "user@ambari.apache.org" <user@ambari.apache.org>
>> Subject: Re: How to install and start apache distributed hadoop rather
>> than hortonworks distribution
>>
>> So essentially, if I want to use the Apache distribution, I need to define my
>> own stack? Can't I just change some configuration so that it starts working
>> with the Apache distribution?
>>
>> What I understood from the documentation and code is that to write a stack one
>> needs to provide one's own replacement for "hdp-select" and "conf-select".
>> I could not find documentation on what is expected from these tools (such as
>> which functions one needs to implement), so it looks like a dark area to me.
>>
>> I did a quick grep to see if there is something around the version number of
>> the stack and found this in ambari-common:
>>
>> ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py:
>>     match = re.match('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', stack_version)
>> ambari-common/src/main/python/resource_management/libraries/functions/get_stack_version.py:
>>     match = re.findall('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', home_dir_split[iSubdir])
>> ambari-common/src/main/python/resource_management/libraries/functions/get_stack_version.py:
>>     match = re.match('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', stack_version)
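>>
>> For reference, a quick check of what that pattern accepts (the example
>> version strings below are assumed, not taken from any particular stack):
>>
>> import re
>>
>> pattern = '[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+'   # the pattern quoted above
>>
>> print(bool(re.match(pattern, '2.5.0.0-1245')))   # True: four dotted fields plus a build number
>> print(bool(re.match(pattern, '2.7.3')))          # False: a plain three-part version has no "-<build>" suffix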
>>
>> It looks like there is some rule around the naming of RPM packages and stack
>> versions which I am completely missing!
>>
>>
>>
>> --
>> *With Regards:-*
>> * Aman Poonia*
>>
>> On Wed, Oct 5, 2016 at 11:11 PM, Alejandro Fernandez <
>> afernandez@hortonworks.com> wrote:
>>
>>> Hi Aman,
>>>
>>> Ambari is meant to work with any distribution, as long as it has a stack
>>> definition, which includes the list of services, RPM names, etc. For example:
>>> https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks
>>> Are you trying to build your own stack?
>>>
>>> Thanks,
>>> Alejandro
>>>
>>> From: aman poonia <aman.poonia.29@gmail.com>
>>> Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
>>> Date: Wednesday, October 5, 2016 at 3:10 AM
>>> To: "user@ambari.apache.org" <user@ambari.apache.org>
>>> Subject: How to install and start apache distributed hadoop rather than
>>> hortonworks distribution
>>>
>>> I am new to Ambari and have been trying to set up a cluster.
>>> Ambari looks interesting to use.
>>>
>>> However, I am having a tough time understanding how to install and start
>>> Apache-distributed Hadoop rather than Hortonworks-distributed Hadoop using
>>> Ambari. Is there documentation I can refer to?
>>> There are instances when I don't want to use the Hortonworks distribution
>>> and want to use Apache-distributed Hadoop instead.
>>> I also need some help understanding the naming convention of RPM packages
>>> that Ambari expects. Have I missed something in the documentation?
>>>
>>>
>>> --
>>> *With Regards,*
>>> * Aman Poonia*
>>>
>>
>>
>
