From: aman poonia <aman.poonia.29@gmail.com>
Date: Thu, 13 Oct 2016 12:24:03 +0530
To: Alejandro Fernandez <afernandez@hortonworks.com>, user@ambari.apache.org
Subject: Re: How to install and start apache distributed hadoop rather than hortonworks distribution

Hey Alejandro,

Thank you very much for pointing me to the right source code. I will see what I can figure out from this. :-)

--
With Regards,
Aman Poonia

On Tue, Oct 11, 2016 at 12:28 AM, Alejandro Fernandez <afernandez@hortonworks.com> wrote:

> I think that requirement is based on the fact that Ambari needs to be able
> to compare version numbers.
> Typically, each service's metainfo.xml file defines how to perform a yum
> install of its packages, which can replace variables like the
> stack_version, and insert *
>
> E.g.,
>
> <osSpecifics>
>   <osSpecific>
>     <osFamily>any</osFamily>
>     <packages>
>       <package>
>         <name>hbase</name>
>       </package>
>     </packages>
>   </osSpecific>
> </osSpecifics>
>
> Or
>
> <osSpecific>
>   <osFamily>redhat7,amazon2015,redhat6,suse11,suse12</osFamily>
>   <packages>
>     <package>
>       <name>atlas-metadata_${stack_version}</name>
>     </package>
>     <package>
>       <name>ambari-infra-solr-client</name>
>       <condition>should_install_infra_solr_client</condition>
>     </package>
>     <package>
>       <name>kafka_${stack_version}</name>
>     </package>
>   </packages>
> </osSpecific>
>
> However, you may have to change several other Python functions if you want
> package names that don't conform to that standard, or at least look at what
> these do:
>
> ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
> ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py
> ambari-common/src/main/python/resource_management/libraries/functions/version.py
> ambari-server/src/main/resources/custom_actions/scripts/install_packages.py
>
> Thanks,
> Alejandro
>
> From: aman poonia <aman.poonia.29@gmail.com>
> Date: Saturday, October 8, 2016 at 2:28 AM
> To: Alejandro Fernandez <afernandez@hortonworks.com>
> Subject: Re: How to install and start apache distributed hadoop rather
> than hortonworks distribution
>
> Hi Alejandro,
>
> I downloaded Bigtop and created the ZooKeeper and Hadoop RPMs from the
> Apache-provided tarballs.
> And now I am trying to use these RPMs instead of Hortonworks' to deploy a
> Hadoop cluster, and I am facing difficulty with this, as Ambari searches
> for specific names like
>
> "yum install hadoop_x_x_x_x-xxxx"
> "yum install hadoop_x_x_x_x-xxxx-hdfs"
>
> and so on.
>
> How can I make it work with my own generated RPMs?
>
> --
> With Regards,
> Aman Poonia
>
> On Fri, Oct 7, 2016 at 11:39 PM, Alejandro Fernandez <afernandez@hortonworks.com> wrote:
>
>> Hi Aman,
>>
>> Making your own distribution is no easy task. You can literally spend
>> months trying to do this, since it requires:
>>
>> - tooling (like the equivalent of conf-select and hdp-select to change
>>   symlinks)
>> - packaging of Hadoop into RPMs (or the equivalent for other OSes)
>> - finding compatible versions of each product
>> - providing default configs based on those versions
>> - your own stack advisor
>> - handling configs during stack upgrade (rolling/express)
>> - etc.
>>
>> What exactly are you trying to accomplish?
>>
>> Thanks,
>> Alejandro
>>
>> From: aman poonia <aman.poonia.29@gmail.com>
>> Date: Friday, October 7, 2016 at 4:53 AM
>> To: Alejandro Fernandez <afernandez@hortonworks.com>
>> Cc: "user@ambari.apache.org" <user@ambari.apache.org>
>> Subject: Re: How to install and start apache distributed hadoop rather
>> than hortonworks distribution
>>
>> So essentially, if I want to use the Apache distribution, I need to define
>> my own stack? Can't I just change some configuration so that it starts
>> working with the Apache distribution?
>>
>> What I understood from the documentation and code is that to write a stack,
>> one needs to provide one's own replacement for "hdp-select" and
>> "conf-select", and I could not find documentation around what is expected
>> from these tools (like which functions one needs to implement), so it
>> looks like a dark area to me.
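[Editor's note: the contract these select tools must satisfy is indeed undocumented. Their core job, judging from how Ambari scripts use hdp-select, is repointing stable symlinks at versioned install directories. The following is a rough sketch of that idea only; the tool name, directory layout, and paths are hypothetical illustrations, not Ambari's actual contract.]

```python
import os

# Sketch of a "distro-select"-style tool: point <root>/current/<package>
# at <root>/<version>/<package>, the way hdp-select repoints
# /usr/hdp/current/* at /usr/hdp/<version>/*. All names here are
# hypothetical -- Ambari does not document the exact interface.

def select(root, package, version):
    """Repoint the 'current' symlink for one package at a versioned install."""
    target = os.path.join(root, version, package)
    if not os.path.isdir(target):
        raise ValueError('no such versioned install: %s' % target)
    current_dir = os.path.join(root, 'current')
    if not os.path.isdir(current_dir):
        os.makedirs(current_dir)
    link = os.path.join(current_dir, package)
    if os.path.islink(link):
        os.remove(link)  # drop the old pointer before relinking
    os.symlink(target, link)
    return link
```

For example, `select('/usr/mydistro', 'hadoop-client', '2.7.3.0-1')` would leave `/usr/mydistro/current/hadoop-client` pointing at `/usr/mydistro/2.7.3.0-1/hadoop-client`.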
>> I did a quick grep to see if there is something around the version number
>> of the stack, and found this in ambari-common:
>>
>> ambari-common/src/main/python/resource_management/libraries/functions/stack_select.py:
>>     match = re.match('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', stack_version)
>> ambari-common/src/main/python/resource_management/libraries/functions/get_stack_version.py:
>>     match = re.findall('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', home_dir_split[iSubdir])
>> ambari-common/src/main/python/resource_management/libraries/functions/get_stack_version.py:
>>     match = re.match('[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+', stack_version)
>>
>> It looks like there is some rule around the naming of RPM packages and
>> stack naming which I am completely missing!
>>
>> --
>> With Regards,
>> Aman Poonia
>>
>> On Wed, Oct 5, 2016 at 11:11 PM, Alejandro Fernandez <afernandez@hortonworks.com> wrote:
>>
>>> Hi Aman,
>>>
>>> Ambari is meant to work with any distribution, as long as it has a stack
>>> definition, which includes the list of services, RPM names, etc. For example:
>>> https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/stacks
>>> Are you trying to build your own stack?
>>>
>>> Thanks,
>>> Alejandro
>>>
>>> From: aman poonia <aman.poonia.29@gmail.com>
>>> Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
>>> Date: Wednesday, October 5, 2016 at 3:10 AM
>>> To: "user@ambari.apache.org" <user@ambari.apache.org>
>>> Subject: How to install and start apache distributed hadoop rather than
>>> hortonworks distribution
>>>
>>> I am new to Ambari and have been trying to set up a cluster. Ambari looks
>>> interesting to use.
>>>
>>> However, I am having a tough time understanding how to install and start
>>> Apache-distributed Hadoop rather than Hortonworks-distributed Hadoop
>>> using Ambari. Is there documentation I can refer to?
>>> There are instances when I don't want to use the Hortonworks distribution
>>> and want to use Apache-distributed Hadoop.
>>> I also need some help understanding the naming convention of RPM packages
>>> that Ambari expects. Have I missed something in the documentation?
>>>
>>> --
>>> With Regards,
>>> Aman Poonia
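[Editor's note: the naming rule Aman is missing can be seen by exercising the version pattern quoted from stack_select.py in the thread above. The regex below is copied from that quote; the sample version strings are illustrative.]

```python
import re

# The pattern Ambari uses to recognize a stack version, as quoted in the
# thread. The dots are unescaped (so each "." matches any character), but
# in practice it is applied to strings shaped like "2.5.0.0-1245": four
# numeric fields plus a "-<build>" suffix.
STACK_VERSION_RE = '[0-9]+.[0-9]+.[0-9]+.[0-9]+-[0-9]+'

# An HDP-style version with a build suffix matches...
assert re.match(STACK_VERSION_RE, '2.5.0.0-1245') is not None

# ...but a plain Apache release version does not: it has neither four
# fields nor a "-<build>" suffix, so Ambari's version parsing rejects it.
assert re.match(STACK_VERSION_RE, '2.7.3') is None
```

This is why Bigtop-built RPMs named after plain Apache versions trip up stack_select.py and get_stack_version.py: a custom stack's package versions need the `x.x.x.x-build` shape.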