From: svarnau@apache.org
To: commits@trafodion.incubator.apache.org
Date: Tue, 10 Jan 2017 22:13:52 -0000
Message-Id: <2e32820f214f4b5fbce91ef421487f76@git.apache.org>
In-Reply-To: <9343b70fdc6645918370e570b67b7af0@git.apache.org>
References: <9343b70fdc6645918370e570b67b7af0@git.apache.org>
Subject: [2/7] incubator-trafodion git commit: [TRAFODION-2291] Ambari integration

[TRAFODION-2291] Ambari integration

Integrates Trafodion with the Ambari cluster manager via an mpack (management
pack) plug-in. As described in the install/README.md file, two RPMs are built
to leverage this feature: one to install the mpack, and one used by Ambari to
install Trafodion itself. The mpack plug-in is built with a URL that points to
where the Trafodion RPM is hosted (a yum repo). It can be specified on the
make command line as REPO_URL=...
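The build-and-host flow this commit describes might look roughly like the
following sketch; the repo server name and web-root paths are placeholders,
not part of the commit:

```shell
# Build the mpack RPM, baking in the yum repo URL where the
# apache-trafodion_server RPM will be hosted (note: this is the server
# RPM's repo, not necessarily where traf_ambari itself is hosted).
make package REPO_URL=http://my.repo.server/repo/traf

# Host the RPMs: copy them to a web-served directory and generate
# yum metadata with createrepo (install it from the base OS repos).
sudo yum install -y createrepo
cp distribution/apache-trafodion_server*.rpm /var/www/html/repo/traf/
createrepo /var/www/html/repo/traf

# On the Ambari server node, install the mpack RPM (assumes a yum repo
# file for it is in place); it pulls in ambari-server as a dependency.
sudo yum install -y traf_ambari

# If ambari-server was already running, restart it so it picks up
# the Trafodion management pack.
sudo ambari-server restart
```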
Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/d8a150c0
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/d8a150c0
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/d8a150c0

Branch: refs/heads/master
Commit: d8a150c07611322503f1b4f678eca1d26feaba33
Parents: 403f9f7
Author: Steve Varnau
Authored: Wed Jan 4 19:15:00 2017 +0000
Committer: Steve Varnau
Committed: Wed Jan 4 19:33:23 2017 +0000

----------------------------------------------------------------------
 .rat-excludes                                      |   4 +
 RAT_README                                         |   9 +
 install/.gitignore                                 |   4 +
 install/Makefile                                   |   8 +-
 install/README.md                                  |  75 +++++
 install/ambari-installer/Makefile                  |  79 ++++++
 .../mpack-install/am_install.sh                    |  47 ++++
 install/ambari-installer/repo.template             |  24 ++
 .../TRAFODION/2.1/configuration/dcs-env.xml        | 150 ++++++++++
 .../TRAFODION/2.1/configuration/dcs-log4j.xml      | 107 ++++++++
 .../TRAFODION/2.1/configuration/dcs-site.xml       | 271 +++++++++++++++++++
 .../TRAFODION/2.1/configuration/rest-site.xml      | 157 +++++++++++
 .../2.1/configuration/traf-cluster-env.xml         |  41 +++
 .../2.1/configuration/trafodion-env.xml            | 252 +++++++++++++++++
 .../common-services/TRAFODION/2.1/metainfo.xml     | 165 +++++++++++
 .../TRAFODION/2.1/package/scripts/params.py        |  95 +++++++
 .../2.1/package/scripts/status_params.py           |  23 ++
 .../2.1/package/scripts/trafodiondcs.py            |  60 ++++
 .../2.1/package/scripts/trafodionmaster.py         | 186 +++++++++++++
 .../2.1/package/scripts/trafodionnode.py           | 221 +++++++++++++++
 .../TRAFODION/2.1/role_command_order.json          |  19 ++
 .../TRAFODION/2.1/service_advisor.py               | 256 ++++++++++++++++++
 .../TRAFODION/2.1/themes/theme.json                | 192 +++++++++++++
 .../custom-services/TRAFODION/2.1/metainfo.xml     |  34 +++
 install/ambari-installer/traf-mpack/mpack.json     |  46 ++++
 install/ambari-installer/traf_ambari.spec          |  64 +++++
 26 files changed, 2588 insertions(+), 1 deletion(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/.rat-excludes
----------------------------------------------------------------------
diff --git a/.rat-excludes b/.rat-excludes
index 2e7ded9..ff8b481 100644
--- a/.rat-excludes
+++ b/.rat-excludes
@@ -171,3 +171,7 @@ mod_cfgs.json
 prompt.json
 script.json
 version.json
+# ambari configuration JSON files
+mpack.json
+role_command_order.json
+theme.json

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/RAT_README
----------------------------------------------------------------------
diff --git a/RAT_README b/RAT_README
index f1b9d8d..66a448f 100644
--- a/RAT_README
+++ b/RAT_README
@@ -190,3 +190,12 @@
 tests/phx/src/test/java/org/trafodion/phoenix/end2end/UpsertSelectTest.java
 tests/phx/src/test/java/org/trafodion/phoenix/end2end/UpsertValuesTest.java
 tests/phx/src/test/java/org/trafodion/phoenix/end2end/VariableLengthPKTest.java
+-------------------------------------------------------------------------------
+
+JSON config/data files do not have a comment convention.
+Ambari integration files are in install/ambari-installer/...
+Ambari mandates json format for:
+
+mpack.json
+role_command_order.json
+theme.json

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/.gitignore
----------------------------------------------------------------------
diff --git a/install/.gitignore b/install/.gitignore
index cba5831..5a3dadb 100644
--- a/install/.gitignore
+++ b/install/.gitignore
@@ -2,3 +2,7 @@ installer-*.tar.gz
 LICENSE
 NOTICE
 DISCLAIMER
+ambari-installer/RPMROOT
+ambari-installer/traf-mpack.tar.gz
+ambari-installer/mpack-install/repo
+ambari-installer/traf-mpack/custom-services/TRAFODION/*/repos

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/Makefile
----------------------------------------------------------------------
diff --git a/install/Makefile b/install/Makefile
index 249f27f..9abc6cd 100644
--- a/install/Makefile
+++ b/install/Makefile
@@ -18,7 +18,11 @@ RELEASE_VER ?= ${TRAFODION_VER}-incubating
 RELEASE_TYPE ?= $(shell echo $(TRAFODION_VER_PROD)| sed -e 's/ /-/g')
 INSTALLER_TARNAME = $(shell echo ${RELEASE_TYPE}_installer-${RELEASE_VER}.tar.gz |tr '[A-Z]' '[a-z]')
 
-all: pkg-installer
+all: pkg-installer pkg-ambari
+
+pkg-ambari:
+	cd ambari-installer && $(MAKE) all
+
 pkg-installer: installer/LICENSE installer/NOTICE installer/DISCLAIMER
 	tar czf ${INSTALLER_TARNAME} installer --exclude=tools
@@ -42,3 +46,5 @@ version:
 clean:
 	rm -f ${INSTALLER_TARNAME}
+	cd ambari-installer && $(MAKE) clean
+

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/README.md
----------------------------------------------------------------------
diff --git a/install/README.md b/install/README.md
new file mode 100644
index 0000000..2dcf9f8
--- /dev/null
+++ b/install/README.md
@@ -0,0 +1,75 @@
+
+
+## Trafodion Installers
+
+* **install** - This is the current command-line installer. It installs a server tarball
+  on an existing CDH or HDP Hadoop cluster.
+* **python-installer** - This is the new command-line installer, meant to replace the current
+  command-line installer. Likewise, it installs a server tarball on an existing CDH, HDP,
+  or APACHE cluster.
+* **ambari-installer** - This integrates with the Ambari cluster manager, so it only applies to HDP.
+  In this case, the trafodion server is installed via RPM. This is installed on the Ambari server as
+  a management pack. Trafodion can be included in the initial cluster creation or added later.
+
+## Ambari Integration
+
+The Ambari MPack (management pack) is also packaged as an RPM, which has a dependency on ambari-server.
+Given a proper yum repo file, the `traf_ambari` rpm
+can be installed directly and it pulls in ambari-server.
+If ambari-server is already installed and running, it must be restarted to pick up the Trafodion
+management pack.
+
+#### Packaging
+
+Part of Ambari's job is to set up yum repo files on each node in order to install packages.
+The default URLs are for Hortonworks' public repos. But since your custom-built Trafodion is
+not hosted there, you need to specify a URL for your local yum repo server. To build that into
+the `traf_ambari` package, use make to specify the value of `REPO_URL`.
+
+    `make package REPO_URL=http://my.repo.server/repo/...`
+
+This can be done either in the install directory or from a top-level "make package-all".
+This is not necessarily the URL where `traf_ambari` is hosted, but rather where
+`apache-trafodion_server` is hosted.
+
+#### Hosting RPM Repo
+
+Once you build the RPM packages, you need to copy them to a web-server-accessible location and
+then use the createrepo command to set up the yum meta-data.
+(sudo yum install createrepo)
+
+#### Source Files
+
+The code for the ambari mpack is here in the install tree, but files that are distributed to each
+node are part of the trafodion server RPM, and are located in core/sqf/sysinstall.
+
+#### Trafodion Environment Variables
+
+The trafodion user environment is set using ~trafodion/.bashrc, which sources in values set by the RPM
+installation, values set by the Ambari install, and values from the installed Trafodion software.
+
+* `/etc/trafodion/trafodion_config` - RPM sets the `TRAF_HOME` value, which is the location of the Trafodion installation.
+* `/etc/trafodion/conf/trafodion-env.sh` - user-specified values set by the Ambari trafodion-node install step.
+* `/etc/trafodion/conf/traf-cluster-env.sh` - node list info set by the Ambari trafodion-master install step.
+* `/home/trafodion/.../sqenv.sh` - various derived values.

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/Makefile
----------------------------------------------------------------------
diff --git a/install/ambari-installer/Makefile b/install/ambari-installer/Makefile
new file mode 100644
index 0000000..2bb660d
--- /dev/null
+++ b/install/ambari-installer/Makefile
@@ -0,0 +1,79 @@
+# @@@ START COPYRIGHT @@@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# @@@ END COPYRIGHT @@@
+
+
+# Makefile to build trafodion management pack for Ambari
+
+RELEASE ?= 1
+REPO_URL ?= http://no.such.server.org/repo
+
+SPECFILE = traf_ambari.spec
+
+RPMROOT=$(PWD)/RPMROOT
+RPMDIR=$(RPMROOT)/RPMS
+SRPMDIR=$(RPMROOT)/SRPMS
+SOURCEDIR=$(RPMROOT)/SOURCES
+BUILDDIR=$(RPMROOT)/BUILD
+BUILDROOTDIR=$(RPMROOT)/BUILDROOT
+
+all: rpmbuild
+
+# need repoinfo file per release
+# traf_ambari needs to support multiple releases so ambari can select
+# correct trafodion version for a given HDP stack
+REPO_VER= 2.1
+
+$(SOURCEDIR)/ambari_rpm.tar.gz: mpack-install/LICENSE repofiles
+	rm -rf $(RPMROOT)
+	mkdir -p $(SOURCEDIR)
+	tar czf $@ traf-mpack mpack-install
+
+# To do: needs enhancement when supporting RH6 & RH7 os
+repofiles:
+	for v in $(REPO_VER); do \
+	  rdir=traf-mpack/custom-services/TRAFODION/$${v}/repos; \
+	  mkdir -p $${rdir} ; \
+	  sed -e "s#REPLACE_WITH_RH6_REPO_URL#$(REPO_URL)/RH6/$${v}#" repo.template > $${rdir}/repoinfo.xml ; \
+	  echo $${rdir}/repoinfo.xml ; \
+	done
+
+mpack-install/LICENSE: ../../licenses/LICENSE-install
+	cp -f $? $@
+
+../../licenses/LICENSE-install:
+	cd $(@D) && $(MAKE) $(@F)
+
+rpmbuild: $(SOURCEDIR)/ambari_rpm.tar.gz
+	mkdir -p $(RPMDIR)
+	mkdir -p $(BUILDDIR)
+	mkdir -p $(BUILDROOTDIR)
+	mkdir -p $(SRPMDIR)
+	rpmbuild -vv -bb \
+	  --define "version $(TRAFODION_VER)" \
+	  --define "release $(RELEASE)" \
+	  --define "_builddir $(BUILDDIR)" \
+	  --define "_buildrootdir $(BUILDROOTDIR)" \
+	  --define "_sourcedir $(SOURCEDIR)" \
+	  --define "_rpmdir $(RPMDIR)" \
+	  --define "_topdir $(RPMROOT)" \
+	  $(SPECFILE)
+	mkdir -p ../../distribution
+	mv -f $(RPMROOT)/RPMS/noarch/traf_ambari*.rpm ../../distribution/
+
+clean:
+	rm -rf $(RPMROOT)
+	rm -rf mpack-install/LICENSE
+	rm -rf traf-mpack/custom-services/TRAFODION/*/repos

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/mpack-install/am_install.sh
----------------------------------------------------------------------
diff --git a/install/ambari-installer/mpack-install/am_install.sh b/install/ambari-installer/mpack-install/am_install.sh
new file mode 100755
index 0000000..43dae96
--- /dev/null
+++ b/install/ambari-installer/mpack-install/am_install.sh
@@ -0,0 +1,47 @@
+#!/bin/bash
+# @@@ START COPYRIGHT @@@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+# @@@ END COPYRIGHT @@@
+
+# generate new unique key local to ambari server
+tf=/tmp/trafssh.$$
+rm -f ${tf}*
+/usr/bin/ssh-keygen -q -t rsa -N '' -f $tf
+
+instloc="$1"
+
+config="${instloc}/traf-mpack/common-services/TRAFODION/2.1/configuration/trafodion-env.xml"
+
+chmod 0600 $config # protect key
+sed -i -e "/TRAFODION-GENERATED-SSH-KEY/r $tf" $config # add key to config properties
+
+rm -f ${tf}*
+
+# tar up the mpack, including generated key
+tball="${instloc}/traf-mpack.tar.gz"
+
+cd "${instloc}"
+tar czf "$tball" traf-mpack
+
+# install ambari mpack
+ambari-server install-mpack --verbose --mpack="$tball"
+ret=$?
+
+exit $ret

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/repo.template
----------------------------------------------------------------------
diff --git a/install/ambari-installer/repo.template b/install/ambari-installer/repo.template
new file mode 100644
index 0000000..8a3ebfd
--- /dev/null
+++ b/install/ambari-installer/repo.template
@@ -0,0 +1,24 @@
+
+
+
+ REPLACE_WITH_RH6_REPO_URL
+ Trafodion
+ Trafodion
+
+

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-env.xml
----------------------------------------------------------------------
diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-env.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-env.xml
new file mode 100644
index 0000000..5f65eac
--- /dev/null
+++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-env.xml
@@ -0,0 +1,150 @@
+
+ dcs.servers
+ Client Connection Servers
+ 16
+ The cluster-wide total number of concurrent DCS servers
+ int
+ 0
+ 256
+ Servers
+ 4
+
+ content
+ dcs-env template
+ Template for dcs-env.sh file
+
+# This script sets variables multiple times over the course of starting a dcs process,
+# so try to keep things idempotent unless you want to take an even deeper look
+# into the startup scripts (bin/dcs, etc.)
+
+# The java implementation to use. Java 1.7 required.
+# export JAVA_HOME=/usr/java/jdk1.7.0/
+export JAVA_HOME={{java_home}}
+
+# Add Trafodion to the classpath
+if [[ "$TRAF_HOME" != "" ]]; then
+  if [[ -d $TRAF_HOME ]]; then
+    export DCS_CLASSPATH=${CLASSPATH}:
+  fi
+fi
+
+# Extra Java CLASSPATH elements. Optional.
+# export DCS_CLASSPATH=${DCS_CLASSPATH}:
+
+# The maximum amount of heap to use, in MB. Default is 128.
+# export DCS_HEAPSIZE=128
+
+# Extra Java runtime options.
+# Below are what we set by default. May only work with SUN JVM.
+# For more on why as well as other possible settings,
+# see http://wiki.apache.org/hadoop/PerformanceTuning
+export DCS_OPTS="-XX:+UseConcMarkSweepGC"
+
+# Uncomment below to enable java garbage collection logging for the server-side processes
+# this enables basic gc logging for the server processes to the .out file
+# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution $DCS_GC_OPTS"
+
+# this enables gc logging using automatic GC log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+. Either use this set of options or the one above
+# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M $DCS_GC_OPTS"
+
+# Uncomment below to enable java garbage collection logging for the client processes in the .out file.
+# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $DCS_GC_OPTS"
+
+# Uncomment below (along with above GC logging) to put GC information in its own logfile (will set DCS_GC_OPTS).
+# This applies to both the server and client GC options above
+# export DCS_USE_GC_LOGFILE=true
+
+# Uncomment below if you intend to use the EXPERIMENTAL off heap cache.
+# export DCS_OPTS="$DCS_OPTS -XX:MaxDirectMemorySize="
+# Set dcs.offheapcache.percentage in dcs-site.xml to a nonzero value.
+
+# Uncomment and adjust to enable JMX exporting
+# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
+# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
+# export DCS_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+# export DCS_MASTER_OPTS="$DCS_MASTER_OPTS $DCS_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
+# export DCS_SERVER_OPTS="$DCS_SERVER_OPTS $DCS_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
+# export DCS_REST_OPTS="$DCS_REST_OPTS $DCS_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
+# export DCS_ZOOKEEPER_OPTS="$DCS_ZOOKEEPER_OPTS $DCS_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
+
+# File naming host on which DCS Primary Master is configured to run. $DCS_HOME/conf/master by default.
+# export DCS_PRIMARY_MASTER=${DCS_HOME}/conf/master
+
+# File naming hosts on which DCS Backup Masters are configured to run. $DCS_HOME/conf/backup-masters by default.
+# export DCS_BACKUP_MASTERS=${DCS_HOME}/conf/backup-masters
+
+# File naming hosts on which DCS Servers will run. $DCS_HOME/conf/servers by default.
+# export DCS_SERVERS=${DCS_HOME}/conf/servers
+
+# Extra ssh options. Empty by default.
+# export DCS_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=DCS_CONF_DIR"
+
+# Where log files are stored. $DCS_HOME/logs by default.
+# export DCS_LOG_DIR=${DCS_HOME}/logs
+
+# Enable remote JDWP debugging of major dcs processes. Meant for Core Developers
+# export DCS_MASTER_OPTS="$DCS_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
+# export DCS_SERVER_OPTS="$DCS_SERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
+# export DCS_REST_OPTS="$DCS_REST_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
+# export DCS_ZOOKEEPER_OPTS="$DCS_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
+
+# A string representing this instance of dcs. $USER by default.
+# export DCS_IDENT_STRING=$USER
+
+# The scheduling priority for daemon processes. See 'man nice'.
+# export DCS_NICENESS=10
+
+# The directory where pid files are stored. $DCS_HOME/tmp by default.
+# export DCS_PID_DIR=/var/dcs/pids
+
+# Tell DCS whether it should manage its own instance of Zookeeper or not.
+export DCS_MANAGES_ZK=false
+
+# Tell DCS where the user program environment lives.
+export DCS_USER_PROGRAM_HOME=$TRAF_HOME
+
+# DCS master port (from dcs-site.xml)
+export DCS_MASTER_PORT={{dcs_master_port}}
+
+# DCS floating IP, if HA is enabled (from dcs-site.xml)
+export DCS_MASTER_FLOATING_IP="{{dcs_floating_ip}}"
+
+ content

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-log4j.xml
----------------------------------------------------------------------
diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-log4j.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-log4j.xml
new file mode 100644
index 0000000..724cb1e
--- /dev/null
+++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-log4j.xml
@@ -0,0 +1,107 @@
+
+ content
+ dcs-log4j template
+ Custom log4j.properties
+
+# Define some default values that can be overridden by system properties
+dcs.root.logger=INFO, console
+dcs.security.logger=INFO, console
+dcs.log.dir=.
+dcs.log.file=dcs.log
+
+# Define the root logger to the system property "dcs.root.logger".
+log4j.rootLogger=${dcs.root.logger}
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${dcs.log.dir}/${dcs.log.file}
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601}, %p, %c, Node Number: , CPU: , PIN: , Process Name: , , ,%m%n
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+#
+# Security audit appender
+#
+dcs.security.log.file=SecurityAuth.audit
+log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFAS.File=${dcs.log.dir}/${dcs.security.log.file}
+log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.category.SecurityLogger=${dcs.security.logger}
+log4j.additivity.SecurityLogger=false
+#log4j.logger.SecurityLogger.org.apache.hadoop.dcs.security.access.AccessController=TRACE
+
+#
+# Null Appender
+#
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.out
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+# Custom Logging levels
+# Disable ZooKeeper/hbase events
+log4j.logger.org.apache.zookeeper=ERROR
+log4j.logger.org.apache.hadoop.hbase=ERROR
+
+# Uncomment this line to enable tracing of DcsMaster
+#log4j.logger.org.trafodion.dcs.master.DcsMaster=DEBUG
+# Uncomment this line to enable tracing of DcsMaster ServerManager
+#log4j.logger.org.trafodion.dcs.master.ServerManager=DEBUG
+# Uncomment this line to enable tracing of DcsServer
+#log4j.logger.org.trafodion.dcs.server.DcsServer=DEBUG
+# Uncomment this line to enable tracing of DcsServer ServerManager
+#log4j.logger.org.trafodion.dcs.server.ServerManager=DEBUG
+
+ content
+ false

----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-site.xml
----------------------------------------------------------------------
diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-site.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-site.xml
new file mode 100644
index 0000000..cd90c98
--- /dev/null
+++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/dcs-site.xml
@@ -0,0 +1,271 @@
+
+ dcs.master.port
+ DCS Master Port
+ 23400
+ Default port for clients to connect to Trafodion.
+ int
+
+ dcs.master.port.range
+ DCS Master Port Range
+ 100
+ Default number of connectivity ports.
+ int
+
+ dcs.master.info.port
+ DCS Master Info Port
+ 24400
+ The port for the DCS Master web UI.
+ Set to -1 if you do not want a User Interface instance run.
+ + + int + + + + + dcs.master.floating.ip + DCS High Availability Enabled + false + Provides failover to DCS Backup servers (requires floating IP address) + + value-list + + + true + + + + false + + + + 1 + + + + dcs.master.floating.ip.external.ip.address + DCS High Availability - Floating IP address + + Provides failover to DCS Backup servers + + string + true + + + + dcs-site + dcs.master.floating.ip + + + + + + dcs.dns.interface + Network interface for DNS + default + The server uses the local host name for reporting its IP address. If your machine + has multiple interfaces the server will use the interface that the primary + host name resolves to. If this is insufficient, you can set this property + to indicate the primary interface e.g., "eth1". This only works if your cluster + configuration is consistent and every host has the same network interface + configuration. + + + string + + + + dcs.master.floating.ip.external.interface + Network interface for floating IP address + default + The server uses the local host name for reporting its IP address. If your machine + has multiple interfaces the server will use the interface that the primary + host name resolves to. If this is insufficient, you can set this property + to indicate the primary interface e.g., "eth1". This only works if your cluster + configuration is consistent and every host has the same network interface + configuration. + + + string + + + + + dcs.server.user.program.statistics.interval.time + Aggragte Statistics Interval + 60 + + Time in seconds on how often the aggregation data should be published. + Setting this value to '0' will revert to default. + Setting this value to '-1' will disable publishing aggregation data. + + + int + -1 + Seconds + + + + dcs.server.user.program.statistics.limit.time + Query Statistics Threshold + 60 + + Time in seconds for how long the query has been executing before publishing + statistics to metric_query_table. To publish all queries set this value to + '0'. 
Setting this value to '-1' will disable publishing any data to + metric_query_table. + The default is 60. + Warning - Setting this value to 0 will cause query performance to degrade + + + int + -1 + Seconds + + + + dcs.server.user.program.statistics.enabled + Query statistics publication + true + + If statistics publication is enabled. + + + boolean + + + + + + dcs.info.threads.max + Info threads maximum + 100 + + The maximum number of threads of the info server thread pool. + Threads in the pool are reused to process requests. This + controls the maximum number of requests processed concurrently. + It may help to control the memory used by the info server to + avoid out of memory issues. If the thread pool is full, incoming requests + will be queued up and wait for some free threads. + + + int + + + + dcs.info.threads.min + Info threads minimum + 2 + + The minimum number of threads of the info server thread pool. + The thread pool always has at least these number of threads so + the info server is ready to serve incoming requests. + + + int + + + + dcs.server.handler.threads.max + Server threads maximum + 10 + + For every DcsServer specified in the conf/servers file the maximum number of server handler threads that will be created. There can never be more than this value for any given DcsServer. + + + int + + + + + zookeeper.session.timeout + ZooKeeper session timeout + 180000 + + The server passes this to the ZooKeeper quorum as suggested maximum time for a + session (This setting becomes ZooKeeper's 'maxSessionTimeout'). See + http://hadoop.apache.org/ZooKeeper/docs/current/ZooKeeperProgrammers.html#ch_zkSessions + "The client sends a requested timeout, the server responds with the + timeout that it can give the client. " In milliseconds. + + + int + + + + zookeeper.znode.parent + Root znode in ZooKeeper + /${user.name} + All of dcs's ZooKeeper + znodes that are configured with a relative path will go under this node. 
+ By default, all of dcs's ZooKeeper file path are configured with a + relative path, so they will all go under this directory unless changed. + + + string + + + + dcs.zookeeper.quorum + ZooKeeper quorum + {{zookeeper_quorum_hosts}} + Comma separated list of servers in the ZooKeeper Quorum. + For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". + + + string + + + + + dcs.zookeeper.property.clientPort + ZooKeeper client port + {{zookeeper_clientPort}} + + The port at which ZooKeeper is listening for clients. + + + string + + + + http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/rest-site.xml ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/rest-site.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/rest-site.xml new file mode 100644 index 0000000..d1b158f --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/rest-site.xml @@ -0,0 +1,157 @@ + + + + + + + + rest.dns.interface + Network interface for DNS + default + The server uses the local host name for reporting its IP address. If your machine + has multiple interfaces the server will use the interface that the primary + host name resolves to. If this is insufficient, you can set this property + to indicate the primary interface e.g., "eth1". This only works if your cluster + configuration is consistent and every host has the same network interface + configuration. + + + string + + + + zookeeper.session.timeout + ZooKeeper session timeout + 180000 + + The server passes this to the ZooKeeper quorum as suggested maximum time for a + session (This setting becomes ZooKeeper's 'maxSessionTimeout'). 
See + http://hadoop.apache.org/ZooKeeper/docs/current/ZooKeeperProgrammers.html#ch_zkSessions + "The client sends a requested timeout, the server responds with the + timeout that it can give the client. " In milliseconds. + + + int + + + + zookeeper.znode.parent + Root znode in ZooKeeper + /${user.name} + The server will look for DCS znodes under this znode + and will create any REST server specific znodes here as well. + + + string + + + + + rest.zookeeper.quorum + ZooKeeper quorum + {{zookeeper_quorum_hosts}} + Comma separated list of servers in the ZooKeeper Quorum. + For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". + + + string + + + + + + rest.zookeeper.property.clientPort + ZooKeeper client port + {{zookeeper_clientPort}} + + The port at which ZooKeeper is listening for clients. + + + string + + + + rest.port + Trafodion REST port + 4200 + The http port for the REST server. + + int + + + + rest.https.port + Trafodion REST secure port + 4201 + The https port for the REST server. + + int + + + + rest.readonly + Trafodion REST read-only mode + false + + Mode the REST server will be started in. + false: All HTTP methods are permitted - GET/PUT/POST/DELETE. + true: Only the GET method is permitted. + + + boolean + + + + rest.threads.max + Trafodion REST maximum threads + 100 + + The maximum number of threads of the server thread pool. + Threads in the pool are reused to process requests. This + controls the maximum number of requests processed concurrently. + It may help to control the memory used by the server to + avoid out of memory issues. If the thread pool is full, incoming requests + will be queued up and wait for some free threads. + + + int + + + + rest.threads.min + Trafodion REST minimum threads + 2 + + The minimum number of threads of the server thread pool. + The thread pool always has at least this number of threads so + the server is ready to serve incoming requests. 
+ + + int + + + + http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/traf-cluster-env.xml ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/traf-cluster-env.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/traf-cluster-env.xml new file mode 100644 index 0000000..dca5d97 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/traf-cluster-env.xml @@ -0,0 +1,41 @@ + + + + + + + content + traf-cluster-env template + Template for cluster-env.sh file + +export NODE_LIST="{{traf_nodes}}" +export MY_NODES="{{traf_w_nodes}}" +export node_count="{{traf_node_count}}" + + + + content + + + http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/trafodion-env.xml ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/trafodion-env.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/trafodion-env.xml new file mode 100644 index 0000000..c2cdf22 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/configuration/trafodion-env.xml @@ -0,0 +1,252 @@ + + + + + + + + + + traf.db.admin + Trafodion DB Admin User + DB__ADMINUSER + Database Admin Role + + string + + + + traf.node.dir + Trafodion local working directories + /hadoop/trafodion/work + List of directories (comma separated) for trafodion scratch space + + string + + + + + traf.ldap.enabled + LDAP authentication enabled + NO + Enable Trafodion user authentication via LDAP + + value-list + + + YES + + + + NO + + + + 1 + + + + traf.ldap.hosts + LDAP server list + + List 
of LDAP hostnames (comma separated) + + + trafodion-env + traf.ldap.enabled + + + + string + true + + + + traf.ldap.port + LDAP port number + 389 + LDAP server(s) port number (Example: 389 for no encryption or TLS, 636 for SSL) + + int + + + + traf.ldap.identifiers + LDAP unique identifiers + + All LDAP unique identifiers (blank separated) + + string + true + + + + traf.ldap.encrypt + LDAP encryption level + 0 + LDAP encryption level must match LDAP server(s) + + value-list + + + 0 + + + + 1 + + + + 2 + + + + 1 + + + + traf.ldap.certpath + LDAP encryption certificate file path + + File path of SSL/TLS certificate file (*.pem) + + + trafodion-env + traf.ldap.encrypt + + + + string + true + + + + traf.ldap.user + LDAP search user name + + User name for LDAP search (if required by LDAP server) + + string + true + + + + traf.ldap.pwd + LDAP search password + + Password for LDAP search (if required by LDAP server) + + password + true + + + + + traf.sshkey.priv + Generated SSH Key + + + Generated value, do not modify + + password + + + + content + trafodion-env template + Template for trafodion-env.sh file + +# sourced from /etc/trafodion/trafodion_config +export JAVA_HOME={{java_home}} + +export TRAF_USER={{traf_user}} + +export DB_ADMIN_USER={{traf_db_admin}} + +export HADOOP_TYPE="hortonworks" + +export TRAFODION_ENABLE_AUTHENTICATION={{traf_ldap_enabled}} + + + + content + + + + ldap_content + trafodion ldap template + Template for .traf_authentication_config file + +# To use authentication in Trafodion, this file must be configured +# as described below and placed in $TRAF_HOME/sql/scripts and be named +# .traf_authentication_config. You must also enable authentication by +# running the script traf_authentication_setup in $TRAF_HOME/sql/scripts. +# +# NOTE: the format of this configuration file is expected to change in the +# next release of Trafodion. Backward compatibility is not guaranteed. 
+# +SECTION: Defaults + DefaultSectionName: local + RefreshTime: 1800 + TLS_CACERTFilename: {{ ldap_certpath }} +SECTION: local +# If one or more of the LDAPHostName values is a load balancing host, list +# the name(s) here, one name: value pair for each host. + LoadBalanceHostName: + +# One or more identically configured hosts must be specified here, +# one name: value pair for each host. + LdapHostname: {{ ldap_hosts }} + +# Default is port 389, change if using 636 or any other port + LdapPort: {{ ldap_port }} + +# Must specify one or more unique identifiers, one name: value pair for each + UniqueIdentifier: {{ ldap_identifiers }} + +# If the configured LDAP server requires a username and password to +# to perform name lookup, provide those here. + LDAPSearchDN: {{ ldap_user }} + LDAPSearchPwd: {{ ldap_pwd }} + +# If configured LDAP server requires TLS(1) or SSL (2), update this value + LDAPSSL: {{ ldap_encrypt }} + +# Default timeout values in seconds + LDAPNetworkTimeout: 30 + LDAPTimeout: 30 + LDAPTimeLimit: 30 + +# Default values for retry logic algorithm + RetryCount: 5 + RetryDelay: 2 + PreserveConnection: No + ExcludeBadHosts: Yes + MaxExcludeListSize: 3 + + + content + + + http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/metainfo.xml ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/metainfo.xml b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/metainfo.xml new file mode 100644 index 0000000..8ee95e9 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/metainfo.xml @@ -0,0 +1,165 @@ + + + + + 2.0 + + + + TRAFODION + + Trafodion + + Transactional SQL-on-Hadoop Database + + 2.1 + + + theme.json + true + + + + + + TRAF_MASTER + Trafodion Master + MASTER + + 1 + + + + PYTHON + 5000 + + + + Initialize + + + PYTHON + 5000 + 
+ + + + + TRAF_NODE + Trafodion Node + SLAVE + + 1-1000 + + + + PYTHON + 5000 + + + + TRAF_DCS_PRIME + Trafodion DCS Master + MASTER + + 1 + + + + PYTHON + 5000 + + + + TRAF_DCS_SECOND + Trafodion DCS Backup + MASTER + + 0-10 + + + + PYTHON + 5000 + + + + + + env + trafodion-env.sh + trafodion-env + + + env + traf-cluster-env.sh + traf-cluster-env + + + env + dcs-env.sh + dcs-env + + + env + log4j.properties + dcs-log4j + + + xml + rest-site.xml + rest-site + + + xml + dcs-site.xml + dcs-site + + + + + + redhat6 + + apache-trafodion_server + + + + + HDFS + HBASE + HIVE + ZOOKEEPER + + + trafodion-env + traf-cluster-env + dcs-env + dcs-log4j + dcs-site + rest-site + + + + http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/params.py ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/params.py b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/params.py new file mode 100755 index 0000000..9890975 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/params.py @@ -0,0 +1,95 @@ +#!/usr/bin/env python +# @@@ START COPYRIGHT @@@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# @@@ END COPYRIGHT @@@ +from resource_management import * + +# config object that holds the configurations declared in the config xml file +config = Script.get_config() + +java_home = config['hostLevelParams']['java_home'] +java_version = int(config['hostLevelParams']['java_version']) + +dcs_servers = config['configurations']['dcs-env']['dcs.servers'] +dcs_master_port = config['configurations']['dcs-site']['dcs.master.port'] +dcs_info_port = config['configurations']['dcs-site']['dcs.master.info.port'] +dcs_floating_ip = config['configurations']['dcs-site']['dcs.master.floating.ip.external.ip.address'] +dcs_mast_node_list = default("/clusterHostInfo/traf_dcs_prime_hosts", '') +dcs_back_node_list = default("/clusterHostInfo/traf_dcs_second_hosts", '') +dcs_env_template = config['configurations']['dcs-env']['content'] +dcs_log4j_template = config['configurations']['dcs-log4j']['content'] + +zookeeper_quorum_hosts = ",".join(config['clusterHostInfo']['zookeeper_hosts']) +if 'zoo.cfg' in config['configurations'] and 'clientPort' in config['configurations']['zoo.cfg']: + zookeeper_clientPort = config['configurations']['zoo.cfg']['clientPort'] +else: + zookeeper_clientPort = '2181' + +traf_db_admin = config['configurations']['trafodion-env']['traf.db.admin'] + +traf_conf_dir = '/etc/trafodion/conf' # path is hard-coded in /etc/trafodion/trafodion_config +traf_env_template = config['configurations']['trafodion-env']['content'] +traf_clust_template = config['configurations']['traf-cluster-env']['content'] + +traf_user = 'trafodion' +traf_group = 'trafodion' 
+hdfs_user = config['configurations']['hadoop-env']['hdfs_user'] +hbase_user = config['configurations']['hbase-env']['hbase_user'] +hbase_staging = config['configurations']['hbase-site']['hbase.bulkload.staging.dir'] + +traf_priv_key = config['configurations']['trafodion-env']['traf.sshkey.priv'] + +traf_node_list = default("/clusterHostInfo/traf_node_hosts", '') + +traf_scratch = config['configurations']['trafodion-env']['traf.node.dir'] + +traf_ldap_template = config['configurations']['trafodion-env']['ldap_content'] +traf_ldap_enabled = config['configurations']['trafodion-env']['traf.ldap.enabled'] +ldap_hosts = config['configurations']['trafodion-env']['traf.ldap.hosts'] +ldap_port = config['configurations']['trafodion-env']['traf.ldap.port'] +ldap_identifiers = config['configurations']['trafodion-env']['traf.ldap.identifiers'] +ldap_user = config['configurations']['trafodion-env']['traf.ldap.user'] +ldap_pwd = config['configurations']['trafodion-env']['traf.ldap.pwd'] +ldap_encrypt = config['configurations']['trafodion-env']['traf.ldap.encrypt'] +ldap_certpath = config['configurations']['trafodion-env']['traf.ldap.certpath'] + +#HDFS Dir creation +hostname = config["hostname"] +hadoop_conf_dir = "/etc/hadoop/conf" +hdfs_user_keytab = config['configurations']['hadoop-env']['hdfs_user_keytab'] +security_enabled = config['configurations']['cluster-env']['security_enabled'] +kinit_path_local = functions.get_kinit_path(default('/configurations/kerberos-env/executable_search_paths', None)) +hdfs_site = config['configurations']['hdfs-site'] +default_fs = config['configurations']['core-site']['fs.defaultFS'] +import functools +#create partial functions with common arguments for every HdfsDirectory call +#to create hdfs directory we need to call params.HdfsDirectory in code +HdfsDirectory = functools.partial( + HdfsResource, + type="directory", + hadoop_conf_dir=hadoop_conf_dir, + user=hdfs_user, + hdfs_site=hdfs_site, + default_fs=default_fs, + security_enabled = 
security_enabled, + keytab = hdfs_user_keytab, + kinit_path_local = kinit_path_local +) + http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/status_params.py ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/status_params.py b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/status_params.py new file mode 100755 index 0000000..e5e6507 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/status_params.py @@ -0,0 +1,23 @@ +#!/usr/bin/env python +# @@@ START COPYRIGHT @@@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# +# @@@ END COPYRIGHT @@@ + +traf_user = 'trafodion' http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodiondcs.py ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodiondcs.py b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodiondcs.py new file mode 100755 index 0000000..24b9623 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodiondcs.py @@ -0,0 +1,60 @@ +# @@@ START COPYRIGHT @@@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# +# @@@ END COPYRIGHT @@@ +import subprocess +from resource_management import * +from tempfile import TemporaryFile + +class DCS(Script): + def install(self, env): + + # Install packages listed in metainfo.xml + self.install_packages(env) + + def configure(self, env): + return True + + def stop(self, env): + import params + Execute('source ~/.bashrc ; dcsstop',user=params.traf_user) + + # REST should run on all DCS backup and master nodes + def start(self, env): + import params + Execute('source ~/.bashrc ; sqcheck -f -c rest || reststart',user=params.traf_user) + Execute('source ~/.bashrc ; dcsstart',user=params.traf_user) + + # Check master pidfile + def status(self, env): + import status_params + cmd = "source ~%s/.bashrc >/dev/null 2>&1; ls $DCS_INSTALL_DIR/tmp/dcs*master.pid" % status_params.traf_user + ofile = TemporaryFile() + try: + Execute(cmd,stdout=ofile) # cannot switch user in status mode for some reason + except: + ofile.close() + raise ComponentIsNotRunning() + ofile.seek(0) # read from beginning + pidfile = ofile.read().rstrip() + ofile.close() + check_process_status(pidfile) + +if __name__ == "__main__": + DCS().execute() http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionmaster.py ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionmaster.py b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionmaster.py new file mode 100755 index 0000000..c7d9a29 --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionmaster.py @@ -0,0 +1,186 @@ +# @@@ START COPYRIGHT @@@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. 
See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# @@@ END COPYRIGHT @@@ +import sys, os +from resource_management import * +from tempfile import TemporaryFile + +class Master(Script): + def install(self, env): + + # Install packages listed in metainfo.xml + self.install_packages(env) + self.configure(env) + + def configure(self, env): + import params + + # generate sqconfig file + cmd = "lscpu|grep -E '(^CPU\(s\)|^Socket\(s\))'|awk '{print $2}'" + ofile = TemporaryFile() + Execute(cmd,stdout=ofile) + ofile.seek(0) # read from beginning + core, processor = ofile.read().split('\n')[:2] + ofile.close() + + core = int(core)-1 if int(core) <= 256 else 255 + + lines = ['begin node\n'] + loc_node_list = [] + for node_id, node in enumerate(params.traf_node_list): + # find the local hostname for each node + cmd = "ssh -q %s hostname" % node + ofile = TemporaryFile() + Execute(cmd,user=params.traf_user,stdout=ofile) + ofile.seek(0) # read from beginning + localhn = ofile.readline().rstrip() + ofile.close() + cmd = "ssh %s 'echo success'" % localhn + Execute(cmd,user=params.traf_user) # verify we can use this hostname to communicate + + line = 'node-id=%s;node-name=%s;cores=0-%d;processors=%s;roles=connection,aggregation,storage\n' \ + % (node_id, localhn, core, processor) + lines.append(line) + loc_node_list.append(localhn) + + 
lines.append('end node\n') + lines.append('\n') + lines.append('begin overflow\n') + for scratch_loc in params.traf_scratch.split(','): + line = 'hdd %s\n' % scratch_loc + lines.append(line) + lines.append('end overflow\n') + + # write sqconfig in trafodion home dir + trafhome = os.path.expanduser("~" + params.traf_user) + File(os.path.join(trafhome,"sqconfig"), + owner = params.traf_user, + group = params.traf_user, + content=''.join(lines), + mode=0644) + + # install sqconfig + Execute('source ~/.bashrc ; mv -f ~/sqconfig $TRAF_HOME/sql/scripts/',user=params.traf_user) + + # write cluster-env in trafodion home dir + traf_nodes = ' '.join(loc_node_list) + traf_w_nodes = '-w ' + ' -w '.join(loc_node_list) + traf_node_count = len(loc_node_list) + if traf_node_count != len(params.traf_node_list): + print "Error cannot determine local hostname for all Trafodion nodes" + exit(1) + + cl_env_temp = os.path.join(trafhome,"traf-cluster-env.sh") + File(cl_env_temp, + owner = params.traf_user, + group = params.traf_user, + content=InlineTemplate(params.traf_clust_template, + traf_nodes=traf_nodes, + traf_w_nodes=traf_w_nodes, + traf_node_count=traf_node_count), + mode=0644) + + # install cluster-env on all nodes + for node in params.traf_node_list: + cmd = "scp %s %s:%s/" % (cl_env_temp, node, params.traf_conf_dir) + Execute(cmd,user=params.traf_user) + cmd = "rm -f %s" % (cl_env_temp) + Execute(cmd,user=params.traf_user) + + # Execute SQ gen + Execute('source ~/.bashrc ; sqgen',user=params.traf_user) + + + #To stop the service, use the linux service stop command and pipe output to log file + def stop(self, env): + import params + Execute('source ~/.bashrc ; sqstop',user=params.traf_user) + + #To start the service, use the linux service start command and pipe output to log file + def start(self, env): + import params + self.configure(env) + + # Check HDFS set up + # Must be in start section, since we need HDFS running + params.HdfsDirectory("/hbase/archive", + 
action="create_on_execute", + owner=params.hbase_user, + group=params.hbase_user, + ) + params.HdfsDirectory(params.hbase_staging, + action="create_on_execute", + owner=params.hbase_user, + group=params.hbase_user, + ) + params.HdfsDirectory("/user/trafodion/trafodion_backups", + action="create_on_execute", + owner=params.traf_user, + group=params.traf_group, + ) + params.HdfsDirectory("/user/trafodion/bulkload", + action="create_on_execute", + owner=params.traf_user, + group=params.traf_group, + ) + params.HdfsDirectory("/user/trafodion/lobs", + action="create_on_execute", + owner=params.traf_user, + group=params.traf_group, + ) + params.HdfsDirectory(None, action="execute") + + try: + cmd = "hdfs dfs -setfacl -R -m user:%s:rwx,default:user:%s:rwx,mask::rwx /hbase/archive" % \ + (params.traf_user, params.traf_user) + Execute(cmd,user=params.hdfs_user) + except: + print "Error: HDFS ACLs must be enabled for config of hdfs:/hbase/archive" + print " Re-start HDFS, HBase, and other affected components before starting Trafodion" + raise Fail("Need HDFS component re-start") + + # Start trafodion + Execute('source ~/.bashrc ; sqstart',user=params.traf_user,logoutput=True) + + def status(self, env): + import status_params + try: + Execute('source ~/.bashrc ; sqshell -c node info | grep $(hostname) | grep -q Up',user=status_params.traf_user) + except: + raise ComponentIsNotRunning() + + def initialize(self, env): + import params + cmd = "source ~/.bashrc ; echo 'initialize Trafodion;' | sqlci" + ofile = TemporaryFile() + Execute(cmd,user=params.traf_user,stdout=ofile,stderr=ofile,logoutput=True) + ofile.seek(0) # read from beginning + output = ofile.read() + ofile.close() + + if (output.find('1395') >= 0 or output.find('1392') >= 0): + print output + '\n' + print "Re-trying initialize as upgrade\n" + cmd = "source ~/.bashrc ; echo 'initialize Trafodion, upgrade;' | sqlci" + Execute(cmd,user=params.traf_user,logoutput=True) + + +if __name__ == "__main__": + 
Master().execute() http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/d8a150c0/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionnode.py ---------------------------------------------------------------------- diff --git a/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionnode.py b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionnode.py new file mode 100755 index 0000000..af943ca --- /dev/null +++ b/install/ambari-installer/traf-mpack/common-services/TRAFODION/2.1/package/scripts/trafodionnode.py @@ -0,0 +1,221 @@ +# @@@ START COPYRIGHT @@@ +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# +# @@@ END COPYRIGHT @@@ +import sys, os, pwd, signal, time +from resource_management import * + +class Node(Script): + def install(self, env): + + # Install packages listed in metainfo.xml + self.install_packages(env) + + self.configure(env) + + def configure(self, env): + import params + + ################## + # trafodion cluster-wide ssh config + trafhome = os.path.expanduser("~" + params.traf_user) + Directory(os.path.join(trafhome,".ssh"), + mode=0700, + owner = params.traf_user, + group = params.traf_group) + # private key generated on ambari server + File(os.path.join(trafhome,".ssh/id_rsa"), + owner = params.traf_user, + group = params.traf_group, + content=params.traf_priv_key, + mode=0600) + + # generate public key from the private one + cmd = "ssh-keygen -y -f " + trafhome + "/.ssh/id_rsa > " + trafhome + "/.ssh/id_rsa.pub" + Execute(cmd,user=params.traf_user) + cmd = "cat " + trafhome + "/.ssh/id_rsa.pub >> " + trafhome + "/.ssh/authorized_keys" + Execute(cmd,user=params.traf_user) + cmd = "chmod 0600 " + trafhome + "/.ssh/authorized_keys" + Execute(cmd,user=params.traf_user) + sshopt = format('Host *\n' + ' StrictHostKeyChecking=no\n') + File(os.path.join(trafhome,".ssh/config"), + owner = params.traf_user, + group = params.traf_group, + content=sshopt, + mode=0600) + + # env files use java_home, be sure we are on 1.8 + # might be better to check this earlier (in service_advisor.py) + if params.java_version < 8: + print "Error: Java 1.8 required for Trafodion and HBase" + print " Use 'ambari setup' to change JDK and restart HBase before continuing" + exit(1) + ################## + # create env files + env.set_params(params) + Directory(params.traf_conf_dir, + mode=0755, + owner = params.traf_user, + group = params.traf_group, + create_parents = True) + traf_conf_path = os.path.join(params.traf_conf_dir, "trafodion-env.sh") + File(traf_conf_path, + owner = params.traf_user, + group = params.traf_group, + 
content=InlineTemplate(params.traf_env_template,trim_blocks=False), + mode=0644) + # cluster file will be over-written by trafodionmaster install + # until then, make file that shell can source without error + traf_conf_path = os.path.join(params.traf_conf_dir, "traf-cluster-env.sh") + File(traf_conf_path, + owner = params.traf_user, + group = params.traf_group, + content="# place-holder", + mode=0644) + # initialize & verify env (e.g., creates $TRAF_HOME/tmp as trafodion user) + cmd = "source ~/.bashrc" + Execute(cmd,user=params.traf_user) + + ################## + # Link TRX files into HBase lib dir + hlib = "/usr/hdp/current/hbase-regionserver/lib/" + trx = "$TRAF_HOME/export/lib/hbase-trx-hdp2_3-${TRAFODION_VER}.jar" + util = "$TRAF_HOME/export/lib/trafodion-utility-${TRAFODION_VER}.jar" + + # run as root, but expand variables using trafodion env + # must be after trafodion user already initializes bashrc + cmd = "source ~" + params.traf_user + "/.bashrc ; ln -f -s " + trx + " " + hlib + Execute(cmd) + cmd = "source ~" + params.traf_user + "/.bashrc ; ln -f -s " + util + " " + hlib + Execute(cmd) + + ################## + # LDAP config + # In future, should move to traf_conf_dir + if params.traf_ldap_enabled == 'YES': + File(os.path.join(trafhome,".traf_authentication_config"), + owner = params.traf_user, + group = params.traf_group, + content = InlineTemplate(params.traf_ldap_template), + mode=0750) + cmd = "source ~/.bashrc ; mv -f ~/.traf_authentication_config $TRAF_HOME/sql/scripts/" + Execute(cmd,user=params.traf_user) + cmd = "source ~/.bashrc ; ldapconfigcheck -file $TRAF_HOME/sql/scripts/.traf_authentication_config" + Execute(cmd,user=params.traf_user) + cmd = 'ldapcheck --verbose --username=%s' % params.traf_db_admin + Execute(cmd,user=params.traf_user) + + ################## + # All Trafodion Nodes need DCS config files + # In future, should move DCS conf to traf_conf_dir + File(os.path.join(trafhome,"dcs-env.sh"), + owner = params.traf_user, + group = 
params.traf_group, + content = InlineTemplate(params.dcs_env_template), + mode=0644) + File(os.path.join(trafhome,"log4j.properties"), + owner = params.traf_user, + group = params.traf_group, + content = InlineTemplate(params.dcs_log4j_template), + mode=0644) + + serverlist = params.dcs_mast_node_list[0] + '\n' + File(os.path.join(trafhome,"master"), + owner = params.traf_user, + group = params.traf_group, + content = serverlist, + mode=0644) + + serverlist = '\n'.join(params.dcs_back_node_list) + '\n' + File(os.path.join(trafhome,"backup-masters"), + owner = params.traf_user, + group = params.traf_group, + content = serverlist, + mode=0644) + + serverlist = '' + node_cnt = len(params.traf_node_list) + per_node = int(params.dcs_servers) // node_cnt + extra = int(params.dcs_servers) % node_cnt + for nnum, node in enumerate(params.traf_node_list, start=0): + if nnum < extra: + serverlist += '%s %s\n' % (node, per_node + 1) + else: + serverlist += '%s %s\n' % (node, per_node) + File(os.path.join(trafhome,"servers"), + owner = params.traf_user, + group = params.traf_group, + content = serverlist, + mode=0644) + XmlConfig("dcs-site.xml", + conf_dir=trafhome, + configurations=params.config['configurations']['dcs-site'], + owner=params.traf_user, + mode=0644) + # install DCS conf files + cmd = "source ~/.bashrc ; mv -f ~/dcs-env.sh ~/log4j.properties ~/dcs-site.xml ~/master ~/backup-masters ~/servers $DCS_INSTALL_DIR/conf/" + Execute(cmd,user=params.traf_user) + + XmlConfig("rest-site.xml", + conf_dir=trafhome, + configurations=params.config['configurations']['rest-site'], + owner=params.traf_user, + mode=0644) + # install REST conf files + cmd = "source ~/.bashrc ; mv -f ~/rest-site.xml $REST_INSTALL_DIR/conf/" + Execute(cmd,user=params.traf_user) + + + + ################## + # create trafodion scratch dirs + for sdir in params.traf_scratch.split(','): + Directory(sdir, + mode=0777, + owner = params.traf_user, + group = params.traf_group, + create_parents = True) + + + # 
Master component does real stop/start for cluster, + # but for ambari restart, provide expected status + def stop(self, env): + import status_params + Execute('touch ~/ambari_node_stop',user=status_params.traf_user) + return True + + def start(self, env): + import status_params + Execute('rm -f ~/ambari_node_stop',user=status_params.traf_user) + self.configure(env) + return True + + def status(self, env): + import status_params + try: + Execute('ls ~/ambari_node_stop && exit 1 || exit 0',user=status_params.traf_user) + except: + raise ComponentIsNotRunning() + try: + Execute('source ~/.bashrc ; sqshell -c node info | grep $(hostname) | grep -q Up',user=status_params.traf_user) + except: + raise ComponentIsNotRunning() + +if __name__ == "__main__": + Node().execute()
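
A note on the servers-file generation in trafodionnode.py above: the configured `dcs.servers` total is spread across the Trafodion nodes by integer division, with the remainder given one extra server on the first nodes. A minimal standalone sketch of that distribution logic (the function name `distribute_dcs_servers` is illustrative, not from the patch):

```python
# Sketch of the conf/servers layout computed in trafodionnode.py:
# dcs_servers total DCS server processes are spread across the nodes;
# the first (dcs_servers % node_count) nodes each get one extra.
def distribute_dcs_servers(dcs_servers, node_list):
    node_cnt = len(node_list)
    per_node = int(dcs_servers) // node_cnt
    extra = int(dcs_servers) % node_cnt
    lines = []
    for nnum, node in enumerate(node_list):
        count = per_node + 1 if nnum < extra else per_node
        lines.append('%s %s' % (node, count))
    return '\n'.join(lines) + '\n'
```

For example, 8 servers over 3 nodes yields counts of 3, 3, and 2, matching the `per_node`/`extra` loop in the patch.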
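
params.py above pre-binds the arguments common to every HDFS directory creation with `functools.partial`, so the call sites in trafodionmaster.py only pass what varies per directory. A self-contained sketch of the same idiom (`hdfs_resource` here is a stand-in that records its arguments, not Ambari's real `HdfsResource` class):

```python
import functools

# Stand-in for Ambari's HdfsResource: merely records the merged kwargs
# so the pre-binding behavior can be observed without a cluster.
def hdfs_resource(path, **kwargs):
    return dict(kwargs, path=path)

# Pre-bind the arguments shared by every directory-creation call,
# mirroring the HdfsDirectory = functools.partial(HdfsResource, ...)
# idiom in params.py.
HdfsDirectory = functools.partial(
    hdfs_resource,
    type="directory",
    hadoop_conf_dir="/etc/hadoop/conf",
    user="hdfs",
)

# Call sites now supply only the per-directory details.
backup_dir = HdfsDirectory("/user/trafodion/trafodion_backups",
                           action="create_on_execute",
                           owner="trafodion")
```

This keeps the keytab, kinit path, and hdfs-site plumbing in one place; in the real scripts those bound arguments come from the Ambari configuration objects.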