Return-Path:
X-Original-To: apmail-ambari-commits-archive@www.apache.org
Delivered-To: apmail-ambari-commits-archive@www.apache.org
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id 0DD9B10E3D for ; Fri, 17 Jan 2014 23:40:53 +0000 (UTC)
Received: (qmail 47972 invoked by uid 500); 17 Jan 2014 23:40:23 -0000
Delivered-To: apmail-ambari-commits-archive@ambari.apache.org
Received: (qmail 47185 invoked by uid 500); 17 Jan 2014 23:40:01 -0000
Mailing-List: contact commits-help@ambari.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: ambari-dev@ambari.apache.org
Delivered-To: mailing list commits@ambari.apache.org
Received: (qmail 47114 invoked by uid 99); 17 Jan 2014 23:39:59 -0000
Received: from tyr.zones.apache.org (HELO tyr.zones.apache.org) (140.211.11.114) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 17 Jan 2014 23:39:59 +0000
Received: by tyr.zones.apache.org (Postfix, from userid 65534) id 8C0E337F75; Fri, 17 Jan 2014 23:39:58 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: mahadev@apache.org
To: commits@ambari.apache.org
Date: Fri, 17 Jan 2014 23:40:23 -0000
Message-Id:
In-Reply-To: <4935d9911ec64bd28475bb1e3b503fb9@git.apache.org>
References: <4935d9911ec64bd28475bb1e3b503fb9@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [27/37] AMBARI-4341. Rename 2.0.8 to 2.1.1 in the stack definition. (mahadev)

http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/setupGanglia.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/setupGanglia.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/setupGanglia.sh
deleted file mode 100644
index 5145b9c..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/setupGanglia.sh
+++ /dev/null
@@ -1,141 +0,0 @@
-#!/bin/sh
-
-#/*
-# * Licensed to the Apache Software Foundation (ASF) under one
-# * or more contributor license agreements. See the NOTICE file
-# * distributed with this work for additional information
-# * regarding copyright ownership. The ASF licenses this file
-# * to you under the Apache License, Version 2.0 (the
-# * "License"); you may not use this file except in compliance
-# * with the License. You may obtain a copy of the License at
-# *
-# * http://www.apache.org/licenses/LICENSE-2.0
-# *
-# * Unless required by applicable law or agreed to in writing, software
-# * distributed under the License is distributed on an "AS IS" BASIS,
-# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# * See the License for the specific language governing permissions and
-# * limitations under the License.
-# */
-
-cd `dirname ${0}`;
-
-# Get access to Ganglia-wide constants, utilities etc.
-source ./gangliaLib.sh
-
-function usage()
-{
-  cat << END_USAGE
-Usage: ${0} [-c [-m]] [-t] [-o ] [-g ]
-
-Options:
-  -c  The name of the Ganglia Cluster whose gmond configuration we're here to generate.
-
-  -m  Whether this gmond (if -t is not specified) is the master for its Ganglia
-      Cluster. Without this, we generate slave gmond configuration.
-
-  -t  Whether this is a call to generate gmetad configuration (as opposed to the
-      gmond configuration that is generated without this).
-  -o  Owner
-  -g  Group
-END_USAGE
-}
-
-function instantiateGmetadConf()
-{
-  # gmetad utility library.
-  source ./gmetadLib.sh;
-
-  generateGmetadConf > ${GMETAD_CONF_FILE};
-}
-
-function instantiateGmondConf()
-{
-  # gmond utility library.
-  source ./gmondLib.sh;
-
-  gmondClusterName=${1};
-
-  if [ "x" != "x${gmondClusterName}" ]
-  then
-
-    createDirectory "${GANGLIA_RUNTIME_DIR}/${gmondClusterName}";
-    createDirectory "${GANGLIA_CONF_DIR}/${gmondClusterName}/conf.d";
-
-    # Always blindly generate the core gmond config - that goes on every box running gmond.
-    generateGmondCoreConf ${gmondClusterName} > `getGmondCoreConfFileName ${gmondClusterName}`;
-
-    isMasterGmond=${2};
-
-    # Decide whether we want to add on the master or slave gmond config.
-    if [ "0" -eq "${isMasterGmond}" ]
-    then
-      generateGmondSlaveConf ${gmondClusterName} > `getGmondSlaveConfFileName ${gmondClusterName}`;
-    else
-      generateGmondMasterConf ${gmondClusterName} > `getGmondMasterConfFileName ${gmondClusterName}`;
-    fi
-
-    chown -R ${3}:${4} ${GANGLIA_CONF_DIR}/${gmondClusterName}
-
-  else
-    echo "No gmondClusterName passed in, nothing to instantiate";
-  fi
-}
-
-# main()
-
-gmondClusterName=;
-isMasterGmond=0;
-configureGmetad=0;
-owner='root';
-group='root';
-
-while getopts ":c:mto:g:" OPTION
-do
-  case ${OPTION} in
-    c)
-      gmondClusterName=${OPTARG};
-      ;;
-    m)
-      isMasterGmond=1;
-      ;;
-    t)
-      configureGmetad=1;
-      ;;
-    o)
-      owner=${OPTARG};
-      ;;
-    g)
-      group=${OPTARG};
-      ;;
-    ?)
-      usage;
-      exit 1;
-  esac
-done
-
-# Initialization.
-createDirectory ${GANGLIA_CONF_DIR};
-createDirectory ${GANGLIA_RUNTIME_DIR};
-# So rrdcached can drop its PID files in here.
-chmod a+w ${GANGLIA_RUNTIME_DIR};
-chown ${owner}:${group} ${GANGLIA_CONF_DIR};
-
-if [ -n "${gmondClusterName}" ]
-then
-
-  # Be forgiving of users who pass in -c along with -t (which always takes precedence).
-  if [ "1" -eq "${configureGmetad}" ]
-  then
-    instantiateGmetadConf;
-  else
-    instantiateGmondConf ${gmondClusterName} ${isMasterGmond} ${owner} ${group};
-  fi
-
-elif [ "1" -eq "${configureGmetad}" ]
-then
-  instantiateGmetadConf;
-else
-  usage;
-  exit 2;
-fi

http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmetad.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmetad.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmetad.sh
deleted file mode 100644
index ab5102d..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmetad.sh
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/bin/sh
-
-#/*
-# * Licensed to the Apache Software Foundation (ASF) under one
-# * or more contributor license agreements. See the NOTICE file
-# * distributed with this work for additional information
-# * regarding copyright ownership. The ASF licenses this file
-# * to you under the Apache License, Version 2.0 (the
-# * "License"); you may not use this file except in compliance
-# * with the License. You may obtain a copy of the License at
-# *
-# * http://www.apache.org/licenses/LICENSE-2.0
-# *
-# * Unless required by applicable law or agreed to in writing, software
-# * distributed under the License is distributed on an "AS IS" BASIS,
-# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# * See the License for the specific language governing permissions and
-# * limitations under the License.
-# */
-
-cd `dirname ${0}`;
-
-# Get all our common constants etc. set up.
-source ./gmetadLib.sh;
-
-# To get access to ${RRDCACHED_ALL_ACCESS_UNIX_SOCKET}.
-source ./rrdcachedLib.sh;
-
-# Before starting gmetad, start rrdcached.
-./startRrdcached.sh;
-
-if [ $? -eq 0 ]
-then
-  gmetadRunningPid=`getGmetadRunningPid`;
-
-  # Only attempt to start gmetad if there's not already one running.
-  if [ -z "${gmetadRunningPid}" ]
-  then
-    env RRDCACHED_ADDRESS=${RRDCACHED_ALL_ACCESS_UNIX_SOCKET} \
-        ${GMETAD_BIN} --conf=${GMETAD_CONF_FILE} --pid-file=${GMETAD_PID_FILE};
-
-    for i in `seq 0 5`; do
-      gmetadRunningPid=`getGmetadRunningPid`;
-      if [ -n "${gmetadRunningPid}" ]
-      then
-        break;
-      fi
-      sleep 1;
-    done
-
-    if [ -n "${gmetadRunningPid}" ]
-    then
-      echo "Started ${GMETAD_BIN} with PID ${gmetadRunningPid}";
-    else
-      echo "Failed to start ${GMETAD_BIN}";
-      exit 1;
-    fi
-  else
-    echo "${GMETAD_BIN} already running with PID ${gmetadRunningPid}";
-  fi
-else
-  echo "Not starting ${GMETAD_BIN} because starting ${RRDCACHED_BIN} failed.";
-  exit 2;
-fi

http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmond.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmond.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmond.sh
deleted file mode 100644
index 239b62e..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startGmond.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/sh
-
-#/*
-# * Licensed to the Apache Software Foundation (ASF) under one
-# * or more contributor license agreements. See the NOTICE file
-# * distributed with this work for additional information
-# * regarding copyright ownership. The ASF licenses this file
-# * to you under the Apache License, Version 2.0 (the
-# * "License"); you may not use this file except in compliance
-# * with the License. You may obtain a copy of the License at
-# *
-# * http://www.apache.org/licenses/LICENSE-2.0
-# *
-# * Unless required by applicable law or agreed to in writing, software
-# * distributed under the License is distributed on an "AS IS" BASIS,
-# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# * See the License for the specific language governing permissions and
-# * limitations under the License.
-# */
-
-cd `dirname ${0}`;
-
-# Get all our common constants etc. set up.
-# Pulls in gangliaLib.sh as well, so we can skip pulling it in again.
-source ./gmondLib.sh;
-
-function startGmondForCluster()
-{
-  gmondClusterName=${1};
-
-  gmondRunningPid=`getGmondRunningPid ${gmondClusterName}`;
-
-  # Only attempt to start gmond if there's not already one running.
- if [ -z "${gmondRunningPid}" ] - then - gmondCoreConfFileName=`getGmondCoreConfFileName ${gmondClusterName}`; - - if [ -e "${gmondCoreConfFileName}" ] - then - gmondPidFileName=`getGmondPidFileName ${gmondClusterName}`; - - ${GMOND_BIN} --conf=${gmondCoreConfFileName} --pid-file=${gmondPidFileName}; - - for i in `seq 0 5`; do - gmondRunningPid=`getGmondRunningPid ${gmondClusterName}`; - if [ -n "${gmondRunningPid}" ] - then - break; - fi - sleep 1; - done - - if [ -n "${gmondRunningPid}" ] - then - echo "Started ${GMOND_BIN} for cluster ${gmondClusterName} with PID ${gmondRunningPid}"; - else - echo "Failed to start ${GMOND_BIN} for cluster ${gmondClusterName}"; - exit 1; - fi - fi - else - echo "${GMOND_BIN} for cluster ${gmondClusterName} already running with PID ${gmondRunningPid}"; - fi -} - -# main() -gmondClusterName=${1}; - -if [ "x" == "x${gmondClusterName}" ] -then - # No ${gmondClusterName} passed in as command-line arg, so start - # all the gmonds we know about. - for gmondClusterName in `getConfiguredGangliaClusterNames` - do - startGmondForCluster ${gmondClusterName}; - done -else - # Just start the one ${gmondClusterName} that was asked for. - startGmondForCluster ${gmondClusterName}; -fi http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startRrdcached.sh ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startRrdcached.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startRrdcached.sh deleted file mode 100644 index e79472b..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/startRrdcached.sh +++ /dev/null @@ -1,69 +0,0 @@ -#!/bin/sh - -#/* -# * Licensed to the Apache Software Foundation (ASF) under one -# * or more contributor license agreements. See the NOTICE file -# * distributed with this work for additional information -# * regarding copyright ownership. The ASF licenses this file -# * to you under the Apache License, Version 2.0 (the -# * "License"); you may not use this file except in compliance -# * with the License. You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. -# */ - -cd `dirname ${0}`; - -# Slurp in all our user-customizable settings. -source ./gangliaEnv.sh; - -# Get all our common constants etc. set up. -source ./rrdcachedLib.sh; - -rrdcachedRunningPid=`getRrdcachedRunningPid`; - -# Only attempt to start rrdcached if there's not already one running. 
-if [ -z "${rrdcachedRunningPid}" ] -then - #changed because problem puppet had with nobody user - #sudo -u ${GMETAD_USER} ${RRDCACHED_BIN} -p ${RRDCACHED_PID_FILE} \ - # -m 664 -l unix:${RRDCACHED_ALL_ACCESS_UNIX_SOCKET} \ - # -m 777 -P FLUSH,STATS,HELP -l unix:${RRDCACHED_LIMITED_ACCESS_UNIX_SOCKET} \ - # -b /var/lib/ganglia/rrds -B - su - ${GMETAD_USER} -c "${RRDCACHED_BIN} -p ${RRDCACHED_PID_FILE} \ - -m 664 -l unix:${RRDCACHED_ALL_ACCESS_UNIX_SOCKET} \ - -m 777 -P FLUSH,STATS,HELP -l unix:${RRDCACHED_LIMITED_ACCESS_UNIX_SOCKET} \ - -b ${RRDCACHED_BASE_DIR} -B" - - # Ideally, we'd use ${RRDCACHED_BIN}'s -s ${WEBSERVER_GROUP} option for - # this, but it doesn't take sometimes due to a lack of permissions, - # so perform the operation explicitly to be super-sure. - chgrp ${WEBSERVER_GROUP} ${RRDCACHED_ALL_ACCESS_UNIX_SOCKET}; - chgrp ${WEBSERVER_GROUP} ${RRDCACHED_LIMITED_ACCESS_UNIX_SOCKET}; - - # Check to make sure rrdcached actually started up. - for i in `seq 0 5`; do - rrdcachedRunningPid=`getRrdcachedRunningPid`; - if [ -n "${rrdcachedRunningPid}" ] - then - break; - fi - sleep 1; - done - - if [ -n "${rrdcachedRunningPid}" ] - then - echo "Started ${RRDCACHED_BIN} with PID ${rrdcachedRunningPid}"; - else - echo "Failed to start ${RRDCACHED_BIN}"; - exit 1; - fi -else - echo "${RRDCACHED_BIN} already running with PID ${rrdcachedRunningPid}"; -fi http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmetad.sh ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmetad.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmetad.sh deleted file mode 100644 index 2764e0e..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmetad.sh +++ /dev/null @@ -1,43 +0,0 @@ -#!/bin/sh - -#/* -# * Licensed to the Apache Software Foundation (ASF) under one -# * or more contributor license agreements. See the NOTICE file -# * distributed with this work for additional information -# * regarding copyright ownership. The ASF licenses this file -# * to you under the Apache License, Version 2.0 (the -# * "License"); you may not use this file except in compliance -# * with the License. You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. -# */ - -cd `dirname ${0}`; - -# Get all our common constants etc. set up. -source ./gmetadLib.sh; - -gmetadRunningPid=`getGmetadRunningPid`; - -# Only go ahead with the termination if we could find a running PID. -if [ -n "${gmetadRunningPid}" ] -then - kill -KILL ${gmetadRunningPid}; - echo "Stopped ${GMETAD_BIN} (with PID ${gmetadRunningPid})"; -fi - -# Poll again. -gmetadRunningPid=`getGmetadRunningPid`; - -# Once we've killed gmetad, there should no longer be a running PID. -if [ -z "${gmetadRunningPid}" ] -then - # It's safe to stop rrdcached now. 
- ./stopRrdcached.sh; -fi http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmond.sh ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmond.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmond.sh deleted file mode 100644 index 1af3eb9..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopGmond.sh +++ /dev/null @@ -1,54 +0,0 @@ -#!/bin/sh - -#/* -# * Licensed to the Apache Software Foundation (ASF) under one -# * or more contributor license agreements. See the NOTICE file -# * distributed with this work for additional information -# * regarding copyright ownership. The ASF licenses this file -# * to you under the Apache License, Version 2.0 (the -# * "License"); you may not use this file except in compliance -# * with the License. You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. -# */ - -cd `dirname ${0}`; - -# Get all our common constants etc. set up. -# Pulls in gangliaLib.sh as well, so we can skip pulling it in again. -source ./gmondLib.sh; - -function stopGmondForCluster() -{ - gmondClusterName=${1}; - - gmondRunningPid=`getGmondRunningPid ${gmondClusterName}`; - - # Only go ahead with the termination if we could find a running PID. - if [ -n "${gmondRunningPid}" ] - then - kill -KILL ${gmondRunningPid}; - echo "Stopped ${GMOND_BIN} for cluster ${gmondClusterName} (with PID ${gmondRunningPid})"; - fi -} - -# main() -gmondClusterName=${1}; - -if [ "x" == "x${gmondClusterName}" ] -then - # No ${gmondClusterName} passed in as command-line arg, so stop - # all the gmonds we know about. - for gmondClusterName in `getConfiguredGangliaClusterNames` - do - stopGmondForCluster ${gmondClusterName}; - done -else - stopGmondForCluster ${gmondClusterName}; -fi http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopRrdcached.sh ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopRrdcached.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopRrdcached.sh deleted file mode 100644 index 0a0d8d8..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/stopRrdcached.sh +++ /dev/null @@ -1,41 +0,0 @@ -#!/bin/sh - -#/* -# * Licensed to the Apache Software Foundation (ASF) under one -# * or more contributor license agreements. See the NOTICE file -# * distributed with this work for additional information -# * regarding copyright ownership. The ASF licenses this file -# * to you under the Apache License, Version 2.0 (the -# * "License"); you may not use this file except in compliance -# * with the License. 
You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. -# */ - -cd `dirname ${0}`; - -# Get all our common constants etc. set up. -source ./rrdcachedLib.sh; - -rrdcachedRunningPid=`getRrdcachedRunningPid`; - -# Only go ahead with the termination if we could find a running PID. -if [ -n "${rrdcachedRunningPid}" ] -then - kill -TERM ${rrdcachedRunningPid}; - # ${RRDCACHED_BIN} takes a few seconds to drain its buffers, so wait - # until we're sure it's well and truly dead. - # - # Without this, an immediately following startRrdcached.sh won't do - # anything, because it still sees this soon-to-die instance alive, - # and the net result is that after a few seconds, there's no - # ${RRDCACHED_BIN} running on the box anymore. - sleep 5; - echo "Stopped ${RRDCACHED_BIN} (with PID ${rrdcachedRunningPid})"; -fi http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/teardownGanglia.sh ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/teardownGanglia.sh b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/teardownGanglia.sh deleted file mode 100644 index b27f7a2..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/files/teardownGanglia.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/sh - -#/* -# * Licensed to the Apache Software Foundation (ASF) under one -# * or more contributor license agreements. See the NOTICE file -# * distributed with this work for additional information -# * regarding copyright ownership. The ASF licenses this file -# * to you under the Apache License, Version 2.0 (the -# * "License"); you may not use this file except in compliance -# * with the License. You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. -# */ - -cd `dirname ${0}`; - -# Get access to Ganglia-wide constants, utilities etc. -source ./gangliaLib.sh; - -# Undo what we did while setting up Ganglia on this box. 
-rm -rf ${GANGLIA_CONF_DIR}; -rm -rf ${GANGLIA_RUNTIME_DIR}; http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia.py deleted file mode 100644 index 75626b1..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia.py +++ /dev/null @@ -1,97 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -""" - -from resource_management import * -import os - - -def groups_and_users(): - import params - -def config(): - import params - - shell_cmds_dir = params.ganglia_shell_cmds_dir - shell_files = ['checkGmond.sh', 'checkRrdcached.sh', 'gmetadLib.sh', - 'gmondLib.sh', 'rrdcachedLib.sh', - 'setupGanglia.sh', 'startGmetad.sh', 'startGmond.sh', - 'startRrdcached.sh', 'stopGmetad.sh', - 'stopGmond.sh', 'stopRrdcached.sh', 'teardownGanglia.sh'] - Directory(shell_cmds_dir, - owner="root", - group="root", - recursive=True - ) - init_file("gmetad") - init_file("gmond") - for sh_file in shell_files: - shell_file(sh_file) - for conf_file in ['gangliaClusters.conf', 'gangliaEnv.sh', 'gangliaLib.sh']: - ganglia_TemplateConfig(conf_file) - - -def init_file(name): - import params - - File("/etc/init.d/hdp-" + name, - content=StaticFile(name + ".init"), - mode=0755 - ) - - -def shell_file(name): - import params - - File(params.ganglia_shell_cmds_dir + os.sep + name, - content=StaticFile(name), - mode=0755 - ) - - -def ganglia_TemplateConfig(name, mode=755, tag=None): - import params - - TemplateConfig(format("{params.ganglia_shell_cmds_dir}/{name}"), - owner="root", - group="root", - template_tag=tag, - mode=mode - ) - - -def generate_daemon(ganglia_service, - name=None, - role=None, - owner=None, - group=None): - import params - - cmd = "" - if ganglia_service == "gmond": - if role == "server": - cmd = "{params.ganglia_shell_cmds_dir}/setupGanglia.sh -c {name} -m -o {owner} -g {group}" - else: - cmd = "{params.ganglia_shell_cmds_dir}/setupGanglia.sh -c {name} -o {owner} -g {group}" - elif ganglia_service == "gmetad": - cmd = "{params.ganglia_shell_cmds_dir}/setupGanglia.sh -t -o {owner} -g {group}" - else: - raise Fail("Unexpected ganglia service") - Execute(format(cmd), - path=[params.ganglia_shell_cmds_dir, "/usr/sbin", - "/sbin:/usr/local/bin", "/bin", "/usr/bin"] - ) http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor.py 
---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor.py deleted file mode 100644 index 6ae004b..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor.py +++ /dev/null @@ -1,176 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -""" - -import sys -import os -from os import path -from resource_management import * -from ganglia import generate_daemon -import ganglia -import ganglia_monitor_service - - -class GangliaMonitor(Script): - def install(self, env): - import params - - self.install_packages(env) - env.set_params(params) - self.config(env) - - def start(self, env): - ganglia_monitor_service.monitor("start") - - def stop(self, env): - ganglia_monitor_service.monitor("stop") - - - def status(self, env): - import status_params - pid_file_name = 'gmond.pid' - pid_file_count = 0 - pid_dir = status_params.pid_dir - # Recursively check all existing gmond pid files - for cur_dir, subdirs, files in os.walk(pid_dir): - for file_name in files: - if file_name == pid_file_name: - pid_file = os.path.join(cur_dir, file_name) - check_process_status(pid_file) - pid_file_count += 1 - if pid_file_count == 0: # If no any pid file is present - raise ComponentIsNotRunning() - - - def config(self, env): - import params - - ganglia.groups_and_users() - - Directory(params.ganglia_conf_dir, - owner="root", - group=params.user_group, - recursive=True - ) - - ganglia.config() - - if params.is_namenode_master: - generate_daemon("gmond", - name = "HDPNameNode", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_jtnode_master: - generate_daemon("gmond", - name = "HDPJobTracker", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_rmnode_master: - generate_daemon("gmond", - name = "HDPResourceManager", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_hsnode_master: - generate_daemon("gmond", - name = "HDPHistoryServer", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_hbase_master: - generate_daemon("gmond", - name = "HDPHBaseMaster", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_nodemanager: - generate_daemon("gmond", - name = "HDPNodeManager", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_hsnode_master: - generate_daemon("gmond", - name = "HDPHistoryServer", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_slave: - generate_daemon("gmond", - name = "HDPDataNode", - role = 
"monitor", - owner = "root", - group = params.user_group) - - if params.is_tasktracker: - generate_daemon("gmond", - name = "HDPTaskTracker", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_hbase_rs: - generate_daemon("gmond", - name = "HDPHBaseRegionServer", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_flume: - generate_daemon("gmond", - name = "HDPFlumeServer", - role = "monitor", - owner = "root", - group = params.user_group) - - if params.is_jn_host: - generate_daemon("gmond", - name = "HDPJournalNode", - role = "monitor", - owner = "root", - group = params.user_group) - - Directory(path.join(params.ganglia_dir, "conf.d"), - owner="root", - group=params.user_group - ) - - File(path.join(params.ganglia_dir, "conf.d/modgstatus.conf"), - owner="root", - group=params.user_group - ) - File(path.join(params.ganglia_dir, "conf.d/multicpu.conf"), - owner="root", - group=params.user_group - ) - File(path.join(params.ganglia_dir, "gmond.conf"), - owner="root", - group=params.user_group - ) - - -if __name__ == "__main__": - GangliaMonitor().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor_service.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor_service.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor_service.py deleted file mode 100644 index d86d894..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_monitor_service.py +++ /dev/null @@ -1,31 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-""" - -from resource_management import * - - -def monitor(action=None):# 'start' or 'stop' - if action == "start": - Execute("chkconfig gmond off", - path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin', - ) - Execute( - format( - "service hdp-gmond {action} >> /tmp/gmond.log 2>&1 ; /bin/ps auwx | /bin/grep [g]mond >> /tmp/gmond.log 2>&1"), - path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin' - ) http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server.py deleted file mode 100644 index ab730de..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server.py +++ /dev/null @@ -1,197 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-""" - -import sys -import os -from os import path -from resource_management import * -from ganglia import generate_daemon -import ganglia -import ganglia_server_service - - -class GangliaServer(Script): - def install(self, env): - import params - - self.install_packages(env) - env.set_params(params) - self.config(env) - - def start(self, env): - import params - - env.set_params(params) - ganglia_server_service.server("start") - - def stop(self, env): - import params - - env.set_params(params) - ganglia_server_service.server("stop") - - def status(self, env): - import status_params - env.set_params(status_params) - pid_file = format("{pid_dir}/gmetad.pid") - # Recursively check all existing gmetad pid files - check_process_status(pid_file) - - def config(self, env): - import params - - ganglia.groups_and_users() - ganglia.config() - - if params.has_namenodes: - generate_daemon("gmond", - name = "HDPNameNode", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_jobtracker: - generate_daemon("gmond", - name = "HDPJobTracker", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_hbase_masters: - generate_daemon("gmond", - name = "HDPHBaseMaster", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_resourcemanager: - generate_daemon("gmond", - name = "HDPResourceManager", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_nodemanager: - generate_daemon("gmond", - name = "HDPNodeManager", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_historyserver: - generate_daemon("gmond", - name = "HDPHistoryServer", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_slaves: - generate_daemon("gmond", - name = "HDPDataNode", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_tasktracker: - generate_daemon("gmond", - name = "HDPTaskTracker", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_hbase_rs: - generate_daemon("gmond", - name = "HDPHBaseRegionServer", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_flume: - generate_daemon("gmond", - name = "HDPFlumeServer", - role = "server", - owner = "root", - group = params.user_group) - - if params.has_journalnode: - generate_daemon("gmond", - name = "HDPJournalNode", - role = "server", - owner = "root", - group = params.user_group) - - generate_daemon("gmetad", - name = "gmetad", - role = "server", - owner = "root", - group = params.user_group) - - change_permission() - server_files() - File(path.join(params.ganglia_dir, "gmetad.conf"), - owner="root", - group=params.user_group - ) - - -def change_permission(): - import params - - Directory('/var/lib/ganglia/dwoo', - mode=0777, - owner=params.gmetad_user, - recursive=True - ) - - -def server_files(): - import params - - rrd_py_path = params.rrd_py_path - Directory(rrd_py_path, - recursive=True - ) - rrd_py_file_path = path.join(rrd_py_path, "rrd.py") - File(rrd_py_file_path, - content=StaticFile("rrd.py"), - mode=0755 - ) - rrd_file_owner = params.gmetad_user - if params.rrdcached_default_base_dir != params.rrdcached_base_dir: - Directory(params.rrdcached_base_dir, - owner=rrd_file_owner, - group=rrd_file_owner, - mode=0755, - recursive=True - ) - Directory(params.rrdcached_default_base_dir, - action = "delete" - ) - Link(params.rrdcached_default_base_dir, - to=params.rrdcached_base_dir - ) - elif rrd_file_owner != 
'nobody': - Directory(params.rrdcached_default_base_dir, - owner=rrd_file_owner, - group=rrd_file_owner, - recursive=True - ) - - -if __name__ == "__main__": - GangliaServer().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server_service.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server_service.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server_service.py deleted file mode 100644 index b93e3f8..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/ganglia_server_service.py +++ /dev/null @@ -1,27 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -""" - -from resource_management import * - - -def server(action=None):# 'start' or 'stop' - command = "service hdp-gmetad {action} >> /tmp/gmetad.log 2>&1 ; /bin/ps auwx | /bin/grep [g]metad >> /tmp/gmetad.log 2>&1" - Execute(format(command), - path='/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin' - ) - MonitorWebserver("restart") http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/params.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/params.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/params.py deleted file mode 100644 index 32a7e4b..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/params.py +++ /dev/null @@ -1,80 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-""" - -from resource_management import * -from resource_management.core.system import System - -config = Script.get_config() - -user_group = config['configurations']['global']["user_group"] -ganglia_conf_dir = "/etc/ganglia/hdp" -ganglia_dir = "/etc/ganglia" -ganglia_runtime_dir = config['configurations']['global']["ganglia_runtime_dir"] -ganglia_shell_cmds_dir = "/usr/libexec/hdp/ganglia" - -gmetad_user = config['configurations']['global']["gmetad_user"] -gmond_user = config['configurations']['global']["gmond_user"] - -webserver_group = "apache" -rrdcached_default_base_dir = "/var/lib/ganglia/rrds" -rrdcached_base_dir = config['configurations']['global']["rrdcached_base_dir"] - -ganglia_server_host = config["clusterHostInfo"]["ganglia_server_host"][0] - -hostname = config["hostname"] -namenode_host = default("/clusterHostInfo/namenode_host", []) -jtnode_host = default("/clusterHostInfo/jtnode_host", []) -rm_host = default("/clusterHostInfo/rm_host", []) -hs_host = default("/clusterHostInfo/hs_host", []) -hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", []) -# datanodes are marked as slave_hosts -slave_hosts = default("/clusterHostInfo/slave_hosts", []) -tt_hosts = default("/clusterHostInfo/mapred_tt_hosts", []) -nm_hosts = default("/clusterHostInfo/nm_hosts", []) -hbase_rs_hosts = default("/clusterHostInfo/hbase_rs_hosts", []) -flume_hosts = default("/clusterHostInfo/flume_hosts", []) -jn_hosts = default("/clusterHostInfo/journalnode_hosts", []) - -is_namenode_master = hostname in namenode_host -is_jtnode_master = hostname in jtnode_host -is_rmnode_master = hostname in rm_host -is_hsnode_master = hostname in hs_host -is_hbase_master = hostname in hbase_master_hosts -is_slave = hostname in slave_hosts -is_tasktracker = hostname in tt_hosts -is_nodemanager = hostname in nm_hosts -is_hbase_rs = hostname in hbase_rs_hosts -is_flume = hostname in flume_hosts -is_jn_host = hostname in jn_hosts - -has_namenodes = not len(namenode_host) == 0 -has_jobtracker = not len(jtnode_host) == 0 -has_resourcemanager = not len(rm_host) == 0 -has_historyserver = not len(hs_host) == 0 -has_hbase_masters = not len(hbase_master_hosts) == 0 -has_slaves = not len(slave_hosts) == 0 -has_tasktracker = not len(tt_hosts) == 0 -has_nodemanager = not len(nm_hosts) == 0 -has_hbase_rs = not len(hbase_rs_hosts) == 0 -has_flume = not len(flume_hosts) == 0 -has_journalnode = not len(jn_hosts) == 0 - -if System.get_instance().platform == "suse": - rrd_py_path = '/srv/www/cgi-bin' -else: - rrd_py_path = '/var/www/cgi-bin' http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/status_params.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/status_params.py b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/status_params.py deleted file mode 100644 index 3ccad2f..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/scripts/status_params.py +++ /dev/null @@ -1,25 +0,0 @@ -#!/usr/bin/env python -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. 
The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -""" - -from resource_management import * - -config = Script.get_config() - -pid_dir = config['configurations']['global']['ganglia_runtime_dir'] http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaClusters.conf.j2 ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaClusters.conf.j2 b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaClusters.conf.j2 deleted file mode 100644 index f3bb355..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaClusters.conf.j2 +++ /dev/null @@ -1,35 +0,0 @@ -#/* -# * Licensed to the Apache Software Foundation (ASF) under one -# * or more contributor license agreements. See the NOTICE file -# * distributed with this work for additional information -# * regarding copyright ownership. The ASF licenses this file -# * to you under the Apache License, Version 2.0 (the -# * "License"); you may not use this file except in compliance -# * with the License. You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. 
-# */
-
-#########################################################
-### ClusterName           GmondMasterHost   GmondPort ###
-#########################################################
-
-    HDPJournalNode        {{ganglia_server_host}}   8654
-    HDPFlumeServer        {{ganglia_server_host}}   8655
-    HDPHBaseRegionServer  {{ganglia_server_host}}   8656
-    HDPNodeManager        {{ganglia_server_host}}   8657
-    HDPTaskTracker        {{ganglia_server_host}}   8658
-    HDPDataNode           {{ganglia_server_host}}   8659
-    HDPSlaves             {{ganglia_server_host}}   8660
-    HDPNameNode           {{ganglia_server_host}}   8661
-    HDPJobTracker         {{ganglia_server_host}}   8662
-    HDPHBaseMaster        {{ganglia_server_host}}   8663
-    HDPResourceManager    {{ganglia_server_host}}   8664
-    HDPHistoryServer      {{ganglia_server_host}}   8666
-

http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaEnv.sh.j2
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaEnv.sh.j2 b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaEnv.sh.j2
deleted file mode 100644
index 1ead550..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaEnv.sh.j2
+++ /dev/null
@@ -1,24 +0,0 @@
-#!/bin/sh
-
-#/*
-# * Licensed to the Apache Software Foundation (ASF) under one
-# * or more contributor license agreements. See the NOTICE file
-# * distributed with this work for additional information
-# * regarding copyright ownership. The ASF licenses this file
-# * to you under the Apache License, Version 2.0 (the
-# * "License"); you may not use this file except in compliance
-# * with the License. You may obtain a copy of the License at
-# *
-# * http://www.apache.org/licenses/LICENSE-2.0
-# *
-# * Unless required by applicable law or agreed to in writing, software
-# * distributed under the License is distributed on an "AS IS" BASIS,
-# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# * See the License for the specific language governing permissions and
-# * limitations under the License.
-# */
-
-# Unix users and groups for the binaries we start up.
-GMETAD_USER={{gmetad_user}};
-GMOND_USER={{gmond_user}};
-WEBSERVER_GROUP={{webserver_group}};

http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaLib.sh.j2
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaLib.sh.j2 b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaLib.sh.j2
deleted file mode 100644
index 4b5bdd1..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/GANGLIA/package/templates/gangliaLib.sh.j2
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/sh
-
-#/*
-# * Licensed to the Apache Software Foundation (ASF) under one
-# * or more contributor license agreements. See the NOTICE file
-# * distributed with this work for additional information
-# * regarding copyright ownership. The ASF licenses this file
-# * to you under the Apache License, Version 2.0 (the
-# * "License"); you may not use this file except in compliance
-# * with the License.
You may obtain a copy of the License at -# * -# * http://www.apache.org/licenses/LICENSE-2.0 -# * -# * Unless required by applicable law or agreed to in writing, software -# * distributed under the License is distributed on an "AS IS" BASIS, -# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# * See the License for the specific language governing permissions and -# * limitations under the License. -# */ - -cd `dirname ${0}`; - -GANGLIA_CONF_DIR={{ganglia_conf_dir}}; -GANGLIA_RUNTIME_DIR={{ganglia_runtime_dir}}; -RRDCACHED_BASE_DIR={{rrdcached_base_dir}}; - -# This file contains all the info about each Ganglia Cluster in our Grid. -GANGLIA_CLUSTERS_CONF_FILE=./gangliaClusters.conf; - -function createDirectory() -{ - directoryPath=${1}; - - if [ "x" != "x${directoryPath}" ] - then - mkdir -p ${directoryPath}; - fi -} - -function getGangliaClusterInfo() -{ - clusterName=${1}; - - if [ "x" != "x${clusterName}" ] - then - # Fetch the particular entry for ${clusterName} from ${GANGLIA_CLUSTERS_CONF_FILE}. - awk -v clusterName=${clusterName} '($1 !~ /^#/) && ($1 == clusterName)' ${GANGLIA_CLUSTERS_CONF_FILE}; - else - # Spit out all the non-comment, non-empty lines from ${GANGLIA_CLUSTERS_CONF_FILE}. - awk '($1 !~ /^#/) && (NF)' ${GANGLIA_CLUSTERS_CONF_FILE}; - fi -} - -function getConfiguredGangliaClusterNames() -{ - # Find all the subdirectories in ${GANGLIA_CONF_DIR} and extract only - # the subdirectory name from each. - if [ -e ${GANGLIA_CONF_DIR} ] - then - find ${GANGLIA_CONF_DIR} -maxdepth 1 -mindepth 1 -type d | xargs -n1 basename; - fi -} http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/global.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/global.xml b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/global.xml deleted file mode 100644 index b2c57bd..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/global.xml +++ /dev/null @@ -1,160 +0,0 @@ - - - - - - - hbasemaster_host - - HBase Master Host. - - - regionserver_hosts - - Region Server Hosts - - - hbase_log_dir - /var/log/hbase - Log Directories for HBase. - - - hbase_pid_dir - /var/run/hbase - Log Directories for HBase. - - - hbase_log_dir - /var/log/hbase - Log Directories for HBase. - - - hbase_regionserver_heapsize - 1024 - Log Directories for HBase. - - - hbase_master_heapsize - 1024 - HBase Master Heap Size - - - hstore_compactionthreshold - 3 - HBase HStore compaction threshold. - - - hfile_blockcache_size - 0.40 - HFile block cache size. - - - hstorefile_maxsize - 10737418240 - Maximum HStoreFile Size - - - regionserver_handlers - 60 - HBase RegionServer Handler - - - hregion_majorcompaction - 604800000 - The time between major compactions of all HStoreFiles in a region. Set to 0 to disable automated major compactions. - - - hregion_blockmultiplier - 2 - HBase Region Block Multiplier - - - hregion_memstoreflushsize - - HBase Region MemStore Flush Size. - - - client_scannercaching - 100 - Base Client Scanner Caching - - - zookeeper_sessiontimeout - 30000 - ZooKeeper Session Timeout - - - hfile_max_keyvalue_size - 10485760 - HBase Client Maximum key-value Size - - - hbase_hdfs_root_dir - /apps/hbase/data - HBase Relative Path to HDFS. - - - hbase_conf_dir - /etc/hbase - Config Directory for HBase. 
- - - hdfs_enable_shortcircuit_read - true - HDFS Short Circuit Read - - - hdfs_support_append - true - HDFS append support - - - hstore_blockingstorefiles - 10 - HStore blocking storefiles. - - - regionserver_memstore_lab - true - Region Server memstore. - - - regionserver_memstore_lowerlimit - 0.38 - Region Server memstore lower limit. - - - regionserver_memstore_upperlimit - 0.4 - Region Server memstore upper limit. - - - hbase_conf_dir - /etc/hbase - HBase conf dir. - - - hbase_user - hbase - HBase User Name. - - - http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-policy.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-policy.xml b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-policy.xml deleted file mode 100644 index e45f23c..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-policy.xml +++ /dev/null @@ -1,53 +0,0 @@ - - - - - - - security.client.protocol.acl - * - ACL for HRegionInterface protocol implementations (ie. - clients talking to HRegionServers) - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.admin.protocol.acl - * - ACL for HMasterInterface protocol implementation (ie. - clients talking to HMaster for admin operations). - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.masterregion.protocol.acl - * - ACL for HMasterRegionInterface protocol implementations - (for HRegionServers communicating with HMaster) - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml deleted file mode 100644 index bf4af7d..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml +++ /dev/null @@ -1,356 +0,0 @@ - - - - - - hbase.rootdir - hdfs://localhost:8020/apps/hbase/data - The directory shared by region servers and into - which HBase persists. The URL should be 'fully-qualified' - to include the filesystem scheme. For example, to specify the - HDFS directory '/hbase' where the HDFS instance's namenode is - running at namenode.example.org on port 9000, set this value to: - hdfs://namenode.example.org:9000/hbase. By default HBase writes - into /tmp. Change this configuration else all data will be lost - on machine restart. - - - - hbase.cluster.distributed - true - The mode the cluster will be in. Possible values are - false for standalone mode and true for distributed mode. 
If - false, startup will run all HBase and ZooKeeper daemons together - in the one JVM. - - - - hbase.tmp.dir - /hadoop/hbase - Temporary directory on the local filesystem. - Change this setting to point to a location more permanent - than '/tmp' (The '/tmp' directory is often cleared on - machine restart). - - - - hbase.master.info.bindAddress - - The bind address for the HBase Master web UI - - - - hbase.master.info.port - - The port for the HBase Master web UI. - - - hbase.regionserver.info.port - - The port for the HBase RegionServer web UI. - - - hbase.regionserver.global.memstore.upperLimit - 0.4 - Maximum size of all memstores in a region server before new - updates are blocked and flushes are forced. Defaults to 40% of heap - - - - hbase.regionserver.handler.count - 60 - Count of RPC Listener instances spun up on RegionServers. - Same property is used by the Master for count of master handlers. - Default is 10. - - - - hbase.hregion.majorcompaction - 86400000 - The time (in milliseconds) between 'major' compactions of all - HStoreFiles in a region. Default: 1 day. - Set to 0 to disable automated major compactions. - - - - - hbase.regionserver.global.memstore.lowerLimit - 0.38 - When memstores are being forced to flush to make room in - memory, keep flushing until we hit this mark. Defaults to 35% of heap. - This value equal to hbase.regionserver.global.memstore.upperLimit causes - the minimum possible flushing to occur when updates are blocked due to - memstore limiting. - - - - hbase.hregion.memstore.block.multiplier - 2 - Block updates if memstore has hbase.hregion.memstore.block.multiplier - time hbase.hregion.flush.size bytes. Useful preventing - runaway memstore during spikes in update traffic. Without an - upper-bound, memstore fills such that when it flushes the - resultant flush files take a long time to compact or split, or - worse, we OOME - - - - hbase.hregion.memstore.flush.size - 134217728 - - Memstore will be flushed to disk if size of the memstore - exceeds this number of bytes. Value is checked by a thread that runs - every hbase.server.thread.wakefrequency. - - - - hbase.hregion.memstore.mslab.enabled - true - - Enables the MemStore-Local Allocation Buffer, - a feature which works to prevent heap fragmentation under - heavy write loads. This can reduce the frequency of stop-the-world - GC pauses on large heaps. - - - - hbase.hregion.max.filesize - 10737418240 - - Maximum HStoreFile size. If any one of a column families' HStoreFiles has - grown to exceed this value, the hosting HRegion is split in two. - Default: 1G. - - - - hbase.client.scanner.caching - 100 - Number of rows that will be fetched when calling next - on a scanner if it is not served from (local, client) memory. Higher - caching values will enable faster scanners but will eat up more memory - and some calls of next may take longer and longer times when the cache is empty. - Do not set this value such that the time between invocations is greater - than the scanner timeout; i.e. hbase.regionserver.lease.period - - - - zookeeper.session.timeout - 30000 - ZooKeeper session timeout. - HBase passes this to the zk quorum as suggested maximum time for a - session (This setting becomes zookeeper's 'maxSessionTimeout'). See - http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions - "The client sends a requested timeout, the server responds with the - timeout that it can give the client. " In milliseconds. 
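[Note: the hbase-site.xml entries in this file follow the same flattened presentation. As an illustration, the zookeeper.session.timeout entry just above would normally read as a standard hadoop-style property; the tag names here are assumed, only the name, value, and description are taken from the listing:

  <property>
    <name>zookeeper.session.timeout</name>
    <value>30000</value>
    <description>ZooKeeper session timeout. HBase passes this to the zk quorum
      as the suggested maximum session time, in milliseconds.</description>
  </property>

The matching zookeeper_sessiontimeout entry in global.xml earlier in this diff carries the same 30000 default, so the UI-facing value and the effective site setting agree for this stack.]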
- - - - hbase.client.keyvalue.maxsize - 10485760 - Specifies the combined maximum allowed size of a KeyValue - instance. This is to set an upper boundary for a single entry saved in a - storage file. Since they cannot be split it helps avoiding that a region - cannot be split any further because the data is too large. It seems wise - to set this to a fraction of the maximum region size. Setting it to zero - or less disables the check. - - - - hbase.hstore.compactionThreshold - 3 - - If more than this number of HStoreFiles in any one HStore - (one HStoreFile is written per flush of memstore) then a compaction - is run to rewrite all HStoreFiles files as one. Larger numbers - put off compaction but when it runs, it takes longer to complete. - - - - hbase.hstore.flush.retries.number - 120 - - The number of times the region flush operation will be retried. - - - - - hbase.hstore.blockingStoreFiles - 10 - - If more than this number of StoreFiles in any one Store - (one StoreFile is written per flush of MemStore) then updates are - blocked for this HRegion until a compaction is completed, or - until hbase.hstore.blockingWaitTime has been exceeded. - - - - hfile.block.cache.size - 0.40 - - Percentage of maximum heap (-Xmx setting) to allocate to block cache - used by HFile/StoreFile. Default of 0.25 means allocate 25%. - Set to 0 to disable but it's not recommended. - - - - - - hbase.master.keytab.file - - Full path to the kerberos keytab file to use for logging in - the configured HMaster server principal. - - - - hbase.master.kerberos.principal - - Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name - that should be used to run the HMaster process. The principal name should - be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname - portion, it will be replaced with the actual hostname of the running - instance. - - - - hbase.regionserver.keytab.file - - Full path to the kerberos keytab file to use for logging in - the configured HRegionServer server principal. - - - - hbase.regionserver.kerberos.principal - - Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name - that should be used to run the HRegionServer process. The principal name - should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the - hostname portion, it will be replaced with the actual hostname of the - running instance. An entry for this principal must exist in the file - specified in hbase.regionserver.keytab.file - - - - - - hbase.superuser - hbase - List of users or groups (comma-separated), who are allowed - full privileges, regardless of stored ACLs, across the cluster. - Only used when HBase security is enabled. - - - - - hbase.security.authentication - simple - - - - hbase.security.authorization - false - Enables HBase authorization. Set the value of this property to false to disable HBase authorization. - - - - - hbase.coprocessor.region.classes - - A comma-separated list of Coprocessors that are loaded by - default on all tables. For any override coprocessor method, these classes - will be called in order. After implementing your own Coprocessor, just put - it in HBase's classpath and add the fully qualified class name here. - A coprocessor can also be loaded on demand by setting HTableDescriptor. - - - - - hbase.coprocessor.master.classes - - A comma-separated list of - org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are - loaded by default on the active HMaster process. For any implemented - coprocessor methods, the listed classes will be called in order. 
After - implementing your own MasterObserver, just put it in HBase's classpath - and add the fully qualified class name here. - - - - - hbase.zookeeper.property.clientPort - 2181 - Property from ZooKeeper's config zoo.cfg. - The port at which the clients will connect. - - - - - - hbase.zookeeper.quorum - localhost - Comma separated list of servers in the ZooKeeper Quorum. - For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". - By default this is set to localhost for local and pseudo-distributed modes - of operation. For a fully-distributed setup, this should be set to a full - list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh - this is the list of servers which we will start/stop ZooKeeper on. - - - - - - hbase.zookeeper.useMulti - true - Instructs HBase to make use of ZooKeeper's multi-update functionality. - This allows certain ZooKeeper operations to complete more quickly and prevents some issues - with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).ยท - IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+ - and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will - not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495). - - - - zookeeper.znode.parent - /hbase-unsecure - Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper - files that are configured with a relative path will go under this node. - By default, all of HBase's ZooKeeper file path are configured with a - relative path, so they will all go under this directory unless changed. - - - - - hbase.defaults.for.version.skip - true - Disables version verification. - - - - dfs.domain.socket.path - /var/lib/hadoop-hdfs/dn_socket - Path to domain socket. - - - http://git-wip-us.apache.org/repos/asf/ambari/blob/ae534ed3/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/metainfo.xml deleted file mode 100644 index 7227b6e..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.8/services/HBASE/metainfo.xml +++ /dev/null @@ -1,93 +0,0 @@ - - - - 2.0 - - - HBASE - Non-relational distributed database and centralized service for configuration management & - synchronization - - 0.96.0.2.0.6.0 - - - HBASE_MASTER - MASTER - - - PYTHON - 600 - - - - DECOMMISSION - - - PYTHON - 600 - - - - - - - HBASE_REGIONSERVER - SLAVE - - - PYTHON - - - - - HBASE_CLIENT - CLIENT - - - PYTHON - - - - - - - centos6 - - - rpm - hbase - - - - - - - - PYTHON - 300 - - - - global - hbase-policy - hbase-site - - - - -
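[Note: the metainfo.xml at the end of this diff is likewise shown without its markup. It declares the HBASE service (version 0.96.0.2.0.6.0) with HBASE_MASTER, HBASE_REGIONSERVER, and HBASE_CLIENT components, a DECOMMISSION custom command, an rpm package for centos6, and configuration dependencies on global, hbase-policy, and hbase-site. A minimal sketch of the HBASE_MASTER component stanza, assuming the schemaVersion 2.0 element names used by the Ambari stack definitions; the script paths are not visible in the flattened listing, so the ones below are placeholders:

  <component>
    <name>HBASE_MASTER</name>
    <category>MASTER</category>
    <commandScript>
      <!-- placeholder path, not shown in the flattened listing -->
      <script>scripts/hbase_master.py</script>
      <scriptType>PYTHON</scriptType>
      <timeout>600</timeout>
    </commandScript>
    <customCommands>
      <customCommand>
        <name>DECOMMISSION</name>
        <commandScript>
          <!-- placeholder path -->
          <script>scripts/hbase_master.py</script>
          <scriptType>PYTHON</scriptType>
          <timeout>600</timeout>
        </commandScript>
      </customCommand>
    </customCommands>
  </component>

The PYTHON/300 commandScript near the end of the listing corresponds to the service-level check, and global, hbase-policy, and hbase-site are the config types the service depends on.]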