From: dmitriusan@apache.org
To: commits@ambari.apache.org
Reply-To: ambari-dev@ambari.apache.org
Date: Mon, 23 Dec 2013 21:18:37 -0000
Message-Id: <387b0a08f2914829a7b4b88d5bf6b3df@git.apache.org>
In-Reply-To: <4c22abcbbe9845f6926288922a8ce92f@git.apache.org>
Subject: [09/29] AMBARI-4159. Enable 2.0.8 stack (with python services support) (dlysnichenko)

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkForFormat.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkForFormat.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkForFormat.sh
deleted file mode 100644
index d14091a..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkForFormat.sh
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/bin/sh
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-#
-
-export hdfs_user=$1
-shift
-export conf_dir=$1
-shift
-export mark_dir=$1
-shift
-export name_dirs=$*
-
-export EXIT_CODE=0
-export command="namenode -format"
-export list_of_non_empty_dirs=""
-
-mark_file=/var/run/hadoop/hdfs/namenode-formatted
-if [[ -f ${mark_file} ]] ; then
-  rm -f ${mark_file}
-  mkdir -p ${mark_dir}
-fi
-
-if [[ ! -d $mark_dir ]] ; then
-  for dir in `echo $name_dirs | tr ',' ' '` ; do
-    echo "NameNode Dirname = $dir"
-    cmd="ls $dir | wc -l | grep -q ^0$"
-    eval $cmd
-    if [[ $? -ne 0 ]] ; then
-      (( EXIT_CODE = $EXIT_CODE + 1 ))
-      list_of_non_empty_dirs="$list_of_non_empty_dirs $dir"
-    fi
-  done
-
-  if [[ $EXIT_CODE == 0 ]] ; then
-    su - ${hdfs_user} -c "yes Y | hadoop --config ${conf_dir} ${command}"
-  else
-    echo "ERROR: Namenode directory(s) is non empty. Will not format the namenode. List of non-empty namenode dirs ${list_of_non_empty_dirs}"
-  fi
-else
-  echo "${mark_dir} exists. Namenode DFS already formatted"
-fi
-
-exit $EXIT_CODE
-
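The per-directory emptiness test above shells out through ls | wc -l | grep. For readers following the logic, the same check reduces to a few lines of Python 2 (a sketch only, not part of the commit; the directory list stands in for a hypothetical $name_dirs value):

    import os

    name_dirs = "/hadoop/hdfs/namenode"  # hypothetical $name_dirs value
    non_empty = [d for d in name_dirs.split(',')
                 if os.path.isdir(d) and os.listdir(d)]  # existing, non-empty dirs
    if non_empty:
        print "Will not format; non-empty name dirs: %s" % " ".join(non_empty)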

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkWebUI.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkWebUI.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkWebUI.py
deleted file mode 100644
index f8e9c1a..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/files/checkWebUI.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env python
-
-'''
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements. See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership. The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License. You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-'''
-
-import optparse
-import httplib
-
-#
-# Main.
-#
-def main():
-  parser = optparse.OptionParser(usage="usage: %prog [options] component ")
-  parser.add_option("-m", "--hosts", dest="hosts", help="Comma separated hosts list for WEB UI to check it availability")
-  parser.add_option("-p", "--port", dest="port", help="Port of WEB UI to check it availability")
-
-  (options, args) = parser.parse_args()
-
-  hosts = options.hosts.split(',')
-  port = options.port
-
-  for host in hosts:
-    try:
-      conn = httplib.HTTPConnection(host, port)
-      # This can be modified to get a partial url part to be sent with request
-      conn.request("GET", "/")
-      httpCode = conn.getresponse().status
-      conn.close()
-    except Exception:
-      httpCode = 404
-
-    if httpCode != 200:
-      print "Cannot access WEB UI on: http://" + host + ":" + port
-      exit(1)
-
-
-if __name__ == "__main__":
-  main()
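As the HDFS service check later in this patch shows, checkWebUI.py is staged to /tmp and run with -m <hosts> -p <port> against the JournalNode web UIs. A minimal standalone version of the probe it performs (a sketch; host and port are hypothetical):

    import httplib

    host, port = "jn1.example.com", "8480"  # hypothetical JournalNode host/port
    try:
        conn = httplib.HTTPConnection(host, port)
        conn.request("GET", "/")
        status = conn.getresponse().status
        conn.close()
    except Exception:
        status = 404  # unreachable is treated the same as a bad response
    print "%s:%s -> HTTP %s" % (host, port, status)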

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/datanode.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/datanode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/datanode.py
deleted file mode 100644
index eaa27cf..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/datanode.py
+++ /dev/null
@@ -1,57 +0,0 @@
-"""
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements. See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership. The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License. You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-"""
-
-from resource_management import *
-from hdfs_datanode import datanode
-
-
-class DataNode(Script):
-  def install(self, env):
-    import params
-
-    self.install_packages(env)
-    env.set_params(params)
-
-  def start(self, env):
-    import params
-
-    env.set_params(params)
-    self.config(env)
-    datanode(action="start")
-
-  def stop(self, env):
-    import params
-
-    env.set_params(params)
-    datanode(action="stop")
-
-  def config(self, env):
-    import params
-
-    datanode(action="configure")
-
-  def status(self, env):
-    import status_params
-
-    env.set_params(status_params)
-    check_process_status(status_params.datanode_pid_file)
-
-
-if __name__ == "__main__":
-  DataNode().execute()
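Each of these component scripts subclasses resource_management's Script and ends with Cls().execute(). Roughly, execute() reads the command name Ambari passes on the command line and dispatches to the same-named method; a simplified sketch of that dispatch (the real base class also loads the command JSON, populates the configuration env, and sets up logging):

    import sys

    class ScriptSketch(object):
      # simplified stand-in for resource_management.Script
      def execute(self):
        command = sys.argv[1]            # e.g. "install", "start", "status"
        getattr(self, command)(env=None) # invoke the matching method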
- -""" - -from resource_management import * -from utils import service - - -class HdfsClient(Script): - def install(self, env): - import params - - self.install_packages(env) - env.set_params(params) - self.config(env) - - def start(self, env): - import params - - env.set_params(params) - - def stop(self, env): - import params - - env.set_params(params) - - def config(self, env): - import params - - pass - - -if __name__ == "__main__": - HdfsClient().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_datanode.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_datanode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_datanode.py deleted file mode 100644 index e0b6c39..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_datanode.py +++ /dev/null @@ -1,56 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -""" - -from resource_management import * -from utils import service - - -def datanode(action=None): - import params - - if action == "configure": - Directory(params.dfs_domain_socket_dir, - recursive=True, - mode=0750, - owner=params.hdfs_user, - group=params.user_group) - Directory(params.dfs_data_dir, - recursive=True, - mode=0755, - owner=params.hdfs_user, - group=params.user_group) - - if action == "start": - service( - action=action, name="datanode", - user=params.hdfs_user, - create_pid_dir=True, - create_log_dir=True, - keytab=params.dfs_datanode_keytab_file, - principal=params.dfs_datanode_kerberos_principal - ) - if action == "stop": - service( - action=action, name="datanode", - user=params.hdfs_user, - create_pid_dir=True, - create_log_dir=True, - keytab=params.dfs_datanode_keytab_file, - principal=params.dfs_datanode_kerberos_principal - ) http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py deleted file mode 100644 index a4772cb..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py +++ /dev/null @@ -1,177 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. 

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py
deleted file mode 100644
index a4772cb..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/hdfs_namenode.py
+++ /dev/null
@@ -1,177 +0,0 @@
-"""
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements. See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership. The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License. You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-"""
-
-from resource_management import *
-from utils import service
-from utils import hdfs_directory
-import urlparse
-
-
-def namenode(action=None, format=True):
-  import params
-
-  if action == "configure":
-    create_name_dirs(params.dfs_name_dir)
-
-  if action == "start":
-    if format:
-      format_namenode()
-      pass
-    service(
-      action="start", name="namenode", user=params.hdfs_user,
-      keytab=params.dfs_namenode_keytab_file,
-      principal=params.dfs_namenode_kerberos_principal
-    )
-
-    # TODO: extract creating of dirs to different services
-    create_app_directories()
-    create_user_directories()
-
-  if action == "stop":
-    service(
-      action="stop", name="namenode", user=params.hdfs_user,
-      keytab=params.dfs_namenode_keytab_file,
-      principal=params.dfs_namenode_kerberos_principal
-    )
-
-
-def create_name_dirs(directories):
-  import params
-
-  dirs = directories.split(",")
-  Directory(dirs,
-            mode=0755,
-            owner=params.hdfs_user,
-            group=params.user_group,
-            recursive=True
-  )
-
-
-def create_app_directories():
-  import params
-
-  hdfs_directory(name="/tmp",
-                 owner=params.hdfs_user,
-                 mode="777"
-  )
-  #mapred directories
-  if params.has_histroryserver:
-    hdfs_directory(name="/mapred",
-                   owner=params.mapred_user
-    )
-    hdfs_directory(name="/mapred/system",
-                   owner=params.hdfs_user
-    )
-  #hbase directories
-  if len(params.hbase_master_hosts) != 0:
-    hdfs_directory(name=params.hbase_hdfs_root_dir,
-                   owner=params.hbase_user
-    )
-    hdfs_directory(name=params.hbase_staging_dir,
-                   owner=params.hbase_user,
-                   mode="711"
-    )
-  #hive directories
-  if len(params.hive_server_host) != 0:
-    hdfs_directory(name=params.hive_apps_whs_dir,
-                   owner=params.hive_user,
-                   mode="777"
-    )
-  if len(params.hcat_server_hosts) != 0:
-    hdfs_directory(name=params.webhcat_apps_dir,
-                   owner=params.webhcat_user,
-                   mode="755"
-    )
-  if len(params.hs_host) != 0:
-    if params.yarn_log_aggregation_enabled:
-      hdfs_directory(name=params.yarn_nm_app_log_dir,
-                     owner=params.yarn_user,
-                     group=params.user_group,
-                     mode="1777",
-                     recursive_chmod=True
-      )
-    hdfs_directory(name=params.mapreduce_jobhistory_intermediate_done_dir,
-                   owner=params.mapred_user,
-                   group=params.user_group,
-                   mode="1777"
-    )
-
-    hdfs_directory(name=params.mapreduce_jobhistory_done_dir,
-                   owner=params.mapred_user,
-                   group=params.user_group,
-                   mode="1777"
-    )
-
-  pass
-
-
-def create_user_directories():
-  import params
-
-  hdfs_directory(name=params.smoke_hdfs_user_dir,
-                 owner=params.smoke_user,
-                 mode=params.smoke_hdfs_user_mode
-  )
-
-  if params.has_hive_server_host:
-    hdfs_directory(name=params.hive_hdfs_user_dir,
-                   owner=params.hive_user,
-                   mode=params.hive_hdfs_user_mode
-    )
-
-  if params.has_hcat_server_host:
-    if params.hcat_hdfs_user_dir != params.webhcat_hdfs_user_dir:
-      hdfs_directory(name=params.hcat_hdfs_user_dir,
-                     owner=params.hcat_user,
-                     mode=params.hcat_hdfs_user_mode
-      )
-    hdfs_directory(name=params.webhcat_hdfs_user_dir,
-                   owner=params.webhcat_user,
-                   mode=params.webhcat_hdfs_user_mode
-    )

-
-  if params.has_oozie_server:
-    hdfs_directory(name=params.oozie_hdfs_user_dir,
-                   owner=params.oozie_user,
-                   mode=params.oozie_hdfs_user_mode
-    )
-
-
-def format_namenode(force=None):
-  import params
-
-  mark_dir = params.namenode_formatted_mark_dir
-  dfs_name_dir = params.dfs_name_dir
-  hdfs_user = params.hdfs_user
-  hadoop_conf_dir = params.hadoop_conf_dir
-
-  if not params.dfs_ha_enabled:
-    if force:
-      ExecuteHadoop('namenode -format',
-                    kinit_override=True)
-    else:
-      File('/tmp/checkForFormat.sh',
-           content=StaticFile("checkForFormat.sh"),
-           mode=0755)
-      Execute(format(
-        "sh /tmp/checkForFormat.sh {hdfs_user} {hadoop_conf_dir} {mark_dir} "
-        "{dfs_name_dir}"),
-        not_if=format("test -d {mark_dir}"),
-        path="/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin")
-    Execute(format("mkdir -p {mark_dir}"))
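The format step is made idempotent twice over: checkForFormat.sh refuses to format when any name dir is non-empty, and the Execute above is guarded by not_if="test -d {mark_dir}", with the mark dir created afterwards so reruns skip the whole block. The guard pattern in miniature (a sketch; all paths and arguments are hypothetical):

    import os
    import subprocess

    mark_dir = "/var/run/hadoop/hdfs/namenode/formatted"  # hypothetical mark_dir

    if not os.path.isdir(mark_dir):  # the not_if guard
        # hdfs_user, conf_dir, mark_dir, name_dirs -- hypothetical values
        subprocess.check_call(["sh", "/tmp/checkForFormat.sh",
                               "hdfs", "/etc/hadoop/conf", mark_dir,
                               "/hadoop/hdfs/namenode"])
        os.makedirs(mark_dir)        # mark success so the next run is a no-op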
- -""" - -from resource_management import * -from utils import service -from utils import hdfs_directory - - -def snamenode(action=None, format=False): - import params - - if action == "configure": - Directory(params.fs_checkpoint_dir, - recursive=True, - mode=0755, - owner=params.hdfs_user, - group=params.user_group) - elif action == "start": - service( - action=action, - name="secondarynamenode", - user=params.hdfs_user, - create_pid_dir=True, - create_log_dir=True, - keytab=params.dfs_secondary_namenode_keytab_file, - principal=params.dfs_secondary_namenode_kerberos_principal - ) - elif action == "stop": - service( - action=action, - name="secondarynamenode", - user=params.hdfs_user, - create_pid_dir=True, - create_log_dir=True, - keytab=params.dfs_secondary_namenode_keytab_file, - principal=params.dfs_secondary_namenode_kerberos_principal - ) http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/journalnode.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/journalnode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/journalnode.py deleted file mode 100644 index fd355cc..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/journalnode.py +++ /dev/null @@ -1,74 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
- -""" - -from resource_management import * -from utils import service - - -class JournalNode(Script): - def install(self, env): - import params - - self.install_packages(env) - env.set_params(params) - - def start(self, env): - import params - - env.set_params(params) - self.config(env) - service( - action="start", name="journalnode", user=params.hdfs_user, - create_pid_dir=True, - create_log_dir=True, - keytab=params.dfs_journalnode_keytab_file, - principal=params.dfs_journalnode_kerberos_principal - ) - - def stop(self, env): - import params - - env.set_params(params) - service( - action="stop", name="journalnode", user=params.hdfs_user, - create_pid_dir=True, - create_log_dir=True, - keytab=params.dfs_journalnode_keytab_file, - principal=params.dfs_journalnode_kerberos_principal - ) - - def config(self, env): - import params - - Directory(params.jn_edits_dir, - recursive=True, - owner=params.hdfs_user, - group=params.user_group - ) - pass - - def status(self, env): - import status_params - - env.set_params(status_params) - check_process_status(status_params.journalnode_pid_file) - - -if __name__ == "__main__": - JournalNode().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/namenode.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/namenode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/namenode.py deleted file mode 100644 index 9b0fe43..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/namenode.py +++ /dev/null @@ -1,59 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
- -""" - -from resource_management import * -from hdfs_namenode import namenode - - -class NameNode(Script): - def install(self, env): - import params - - self.install_packages(env) - env.set_params(params) - - def start(self, env): - import params - - env.set_params(params) - self.config(env) - namenode(action="start") - - def stop(self, env): - import params - - env.set_params(params) - namenode(action="stop") - - def config(self, env): - import params - - env.set_params(params) - namenode(action="configure") - pass - - def status(self, env): - import status_params - - env.set_params(status_params) - check_process_status(status_params.namenode_pid_file) - - -if __name__ == "__main__": - NameNode().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/params.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/params.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/params.py deleted file mode 100644 index a51134b..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/params.py +++ /dev/null @@ -1,183 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
- -""" - -from resource_management import * -import status_params -import os - -config = Script.get_config() - -#security params -security_enabled = config['configurations']['global']['security_enabled'] -dfs_journalnode_keytab_file = config['configurations']['hdfs-site']['dfs.journalnode.keytab.file'] -dfs_web_authentication_kerberos_keytab = config['configurations']['hdfs-site']['dfs.journalnode.keytab.file'] -dfs_secondary_namenode_keytab_file = config['configurations']['hdfs-site']['dfs.secondary.namenode.keytab.file'] -dfs_datanode_keytab_file = config['configurations']['hdfs-site']['dfs.datanode.keytab.file'] -dfs_namenode_keytab_file = config['configurations']['hdfs-site']['dfs.namenode.keytab.file'] -smoke_user_keytab = config['configurations']['global']['smokeuser_keytab'] -hdfs_user_keytab = config['configurations']['global']['hdfs_user_keytab'] - -dfs_datanode_kerberos_principal = config['configurations']['hdfs-site']['dfs.datanode.kerberos.principal'] -dfs_journalnode_kerberos_principal = config['configurations']['hdfs-site']['dfs.journalnode.kerberos.principal'] -dfs_secondary_namenode_kerberos_internal_spnego_principal = config['configurations']['hdfs-site']['dfs.secondary.namenode.kerberos.internal.spnego.principal'] -dfs_namenode_kerberos_principal = config['configurations']['hdfs-site']['dfs.namenode.kerberos.principal'] -dfs_web_authentication_kerberos_principal = config['configurations']['hdfs-site']['dfs.web.authentication.kerberos.principal'] -dfs_secondary_namenode_kerberos_principal = config['configurations']['hdfs-site']['dfs.secondary.namenode.kerberos.principal'] -dfs_journalnode_kerberos_internal_spnego_principal = config['configurations']['hdfs-site']['dfs.journalnode.kerberos.internal.spnego.principal'] - -kinit_path_local = get_kinit_path([default("kinit_path_local",None), "/usr/bin", "/usr/kerberos/bin", "/usr/sbin"]) -#hosts -hostname = config["hostname"] -rm_host = default("/clusterHostInfo/rm_host", []) -slave_hosts = default("/clusterHostInfo/slave_hosts", []) -hagios_server_hosts = default("/clusterHostInfo/nagios_server_host", []) -oozie_servers = default("/clusterHostInfo/oozie_server", []) -hcat_server_hosts = default("/clusterHostInfo/webhcat_server_host", []) -hive_server_host = default("/clusterHostInfo/hive_server_host", []) -hbase_master_hosts = default("/clusterHostInfo/hbase_master_hosts", []) -hs_host = default("/clusterHostInfo/hs_host", []) -jtnode_host = default("/clusterHostInfo/jtnode_host", []) -namenode_host = default("/clusterHostInfo/namenode_host", []) -nm_host = default("/clusterHostInfo/nm_host", []) -ganglia_server_hosts = default("/clusterHostInfo/ganglia_server_host", []) -journalnode_hosts = default("/clusterHostInfo/journalnode_hosts", []) -zkfc_hosts = default("/clusterHostInfo/zkfc_hosts", []) - -has_ganglia_server = not len(ganglia_server_hosts) == 0 -has_namenodes = not len(namenode_host) == 0 -has_jobtracker = not len(jtnode_host) == 0 -has_resourcemanager = not len(rm_host) == 0 -has_histroryserver = not len(hs_host) == 0 -has_hbase_masters = not len(hbase_master_hosts) == 0 -has_slaves = not len(slave_hosts) == 0 -has_nagios = not len(hagios_server_hosts) == 0 -has_oozie_server = not len(oozie_servers) == 0 -has_hcat_server_host = not len(hcat_server_hosts) == 0 -has_hive_server_host = not len(hive_server_host) == 0 -has_journalnode_hosts = not len(journalnode_hosts) == 0 -has_zkfc_hosts = not len(zkfc_hosts) == 0 - - -is_namenode_master = hostname in namenode_host -is_jtnode_master = hostname in jtnode_host 
-is_rmnode_master = hostname in rm_host
-is_hsnode_master = hostname in hs_host
-is_hbase_master = hostname in hbase_master_hosts
-is_slave = hostname in slave_hosts
-
-if has_ganglia_server:
-  ganglia_server_host = ganglia_server_hosts[0]
-
-#users and groups
-yarn_user = config['configurations']['global']['yarn_user']
-hbase_user = config['configurations']['global']['hbase_user']
-nagios_user = config['configurations']['global']['nagios_user']
-oozie_user = config['configurations']['global']['oozie_user']
-webhcat_user = config['configurations']['global']['hcat_user']
-hcat_user = config['configurations']['global']['hcat_user']
-hive_user = config['configurations']['global']['hive_user']
-smoke_user = config['configurations']['global']['smokeuser']
-mapred_user = config['configurations']['global']['mapred_user']
-hdfs_user = status_params.hdfs_user
-
-user_group = config['configurations']['global']['user_group']
-proxyuser_group = config['configurations']['global']['proxyuser_group']
-nagios_group = config['configurations']['global']['nagios_group']
-smoke_user_group = "users"
-
-#hadoop params
-hadoop_conf_dir = "/etc/hadoop/conf"
-hadoop_pid_dir_prefix = status_params.hadoop_pid_dir_prefix
-hadoop_bin = "/usr/lib/hadoop/sbin"
-
-hdfs_log_dir_prefix = config['configurations']['global']['hdfs_log_dir_prefix']
-
-dfs_domain_socket_path = config['configurations']['hdfs-site']['dfs.domain.socket.path']
-dfs_domain_socket_dir = os.path.dirname(dfs_domain_socket_path)
-
-hadoop_libexec_dir = "/usr/lib/hadoop/libexec"
-
-jn_edits_dir = config['configurations']['hdfs-site']['dfs.journalnode.edits.dir']#"/grid/0/hdfs/journal"
-
-# if stack_version[0] == "2":
-dfs_name_dir = config['configurations']['hdfs-site']['dfs.namenode.name.dir']
-# else:
-#   dfs_name_dir = default("/configurations/hdfs-site/dfs.name.dir","/tmp/hadoop-hdfs/dfs/name")
-
-namenode_dirs_created_stub_dir = format("{hdfs_log_dir_prefix}/{hdfs_user}")
-namenode_dirs_stub_filename = "namenode_dirs_created"
-
-hbase_hdfs_root_dir = config['configurations']['hbase-site']['hbase.rootdir']#","/apps/hbase/data")
-hbase_staging_dir = "/apps/hbase/staging"
-hive_apps_whs_dir = config['configurations']['hive-site']["hive.metastore.warehouse.dir"] #, "/apps/hive/warehouse")
-webhcat_apps_dir = "/apps/webhcat"
-yarn_log_aggregation_enabled = config['configurations']['yarn-site']['yarn.log-aggregation-enable']#","true")
-yarn_nm_app_log_dir = config['configurations']['yarn-site']['yarn.nodemanager.remote-app-log-dir']#","/app-logs")
-mapreduce_jobhistory_intermediate_done_dir = config['configurations']['mapred-site']['mapreduce.jobhistory.intermediate-done-dir']#","/app-logs")
-mapreduce_jobhistory_done_dir = config['configurations']['mapred-site']['mapreduce.jobhistory.done-dir']#","/mr-history/done")
-
-if has_oozie_server:
-  oozie_hdfs_user_dir = format("/user/{oozie_user}")
-  oozie_hdfs_user_mode = 775
-if has_hcat_server_host:
-  hcat_hdfs_user_dir = format("/user/{hcat_user}")
-  hcat_hdfs_user_mode = 755
-  webhcat_hdfs_user_dir = format("/user/{webhcat_user}")
-  webhcat_hdfs_user_mode = 755
-if has_hive_server_host:
-  hive_hdfs_user_dir = format("/user/{hive_user}")
-  hive_hdfs_user_mode = 700
-smoke_hdfs_user_dir = format("/user/{smoke_user}")
-smoke_hdfs_user_mode = 770
-
-namenode_formatted_mark_dir = format("{hadoop_pid_dir_prefix}/hdfs/namenode/formatted/")
-
-# if stack_version[0] == "2":
-fs_checkpoint_dir = config['configurations']['hdfs-site']['dfs.namenode.checkpoint.dir'] #","/tmp/hadoop-hdfs/dfs/namesecondary")
-# else:
-#   fs_checkpoint_dir = default("/configurations/core-site/fs.checkpoint.dir","/tmp/hadoop-hdfs/dfs/namesecondary")
-
-# if stack_version[0] == "2":
-dfs_data_dir = config['configurations']['hdfs-site']['dfs.datanode.data.dir']#,"/tmp/hadoop-hdfs/dfs/data")
-# else:
-#   dfs_data_dir = default('/configurations/hdfs-site/dfs.data.dir',"/tmp/hadoop-hdfs/dfs/data")
-
-# HDFS High Availability properties
-dfs_ha_enabled = False
-dfs_ha_nameservices = default("/configurations/hdfs-site/dfs.nameservices", None)
-dfs_ha_namenode_ids = default(format("hdfs-site/dfs.ha.namenodes.{dfs_ha_nameservices}"), None)
-if dfs_ha_namenode_ids:
-  dfs_ha_namenode_ids_array_len = len(dfs_ha_namenode_ids.split(","))
-  if dfs_ha_namenode_ids_array_len > 1:
-    dfs_ha_enabled = True
-if dfs_ha_enabled:
-  for nn_id in dfs_ha_namenode_ids:
-    nn_host = config['configurations']['hdfs-site'][format('dfs.namenode.rpc-address.{dfs_ha_nameservices}.{nn_id}')]
-    if hostname in nn_host:
-      namenode_id = nn_id
-  namenode_id = None
-
-journalnode_address = default('/configurations/hdfs-site/dfs.journalnode.http-address', None)
-if journalnode_address:
-  journalnode_port = journalnode_address.split(":")[1]
-
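The HA detection above hinges on dfs.ha.namenodes.<nameservice> holding a comma-separated list of NameNode ids; worked through with hypothetical values:

    dfs_ha_namenode_ids = "nn1,nn2"   # hypothetical dfs.ha.namenodes.<ns> value
    ids = dfs_ha_namenode_ids.split(",")
    dfs_ha_enabled = len(ids) > 1     # True here, so the HA code paths activate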
- -""" - -from resource_management import * - - -class HdfsServiceCheck(Script): - def service_check(self, env): - import params - - env.set_params(params) - unique = get_unique_id_and_date() - dir = '/tmp' - tmp_file = format("{dir}/{unique}") - - safemode_command = "dfsadmin -safemode get | grep OFF" - - create_dir_cmd = format("fs -mkdir {dir} ; hadoop fs -chmod -R 777 {dir}") - test_dir_exists = format("hadoop fs -test -e {dir}") - cleanup_cmd = format("fs -rm {tmp_file}") - #cleanup put below to handle retries; if retrying there wil be a stale file - #that needs cleanup; exit code is fn of second command - create_file_cmd = format( - "{cleanup_cmd}; hadoop fs -put /etc/passwd {tmp_file}") - test_cmd = format("fs -test -e {tmp_file}") - if params.security_enabled: - Execute(format( - "su - {smoke_user} -c '{kinit_path_local} -kt {smoke_user_keytab} " - "{smoke_user}'")) - ExecuteHadoop(safemode_command, - user=params.smoke_user, - logoutput=True, - conf_dir=params.hadoop_conf_dir, - try_sleep=3, - tries=5 - ) - ExecuteHadoop(create_dir_cmd, - user=params.smoke_user, - logoutput=True, - not_if=test_dir_exists, - conf_dir=params.hadoop_conf_dir, - try_sleep=3, - tries=5 - ) - ExecuteHadoop(create_file_cmd, - user=params.smoke_user, - logoutput=True, - conf_dir=params.hadoop_conf_dir, - try_sleep=3, - tries=5 - ) - ExecuteHadoop(test_cmd, - user=params.smoke_user, - logoutput=True, - conf_dir=params.hadoop_conf_dir, - try_sleep=3, - tries=5 - ) - if params.has_journalnode_hosts: - journalnode_port = params.journalnode_port - smoke_test_user = params.smoke_user - checkWebUIFileName = "checkWebUI.py" - checkWebUIFilePath = format("/tmp/{checkWebUIFileName}") - comma_sep_jn_hosts = ",".join(params.journalnode_hosts) - checkWebUICmd = format( - "su - {smoke_test_user} -c 'python {checkWebUIFilePath} -m " - "{comma_sep_jn_hosts} -p {journalnode_port}'") - File(checkWebUIFilePath, - content=StaticFile(checkWebUIFileName)) - - Execute(checkWebUICmd, - logoutput=True, - try_sleep=3, - tries=5 - ) - - if params.has_zkfc_hosts: - pid_dir = format("{hadoop_pid_dir_prefix}/{hdfs_user}") - pid_file = format("{pid_dir}/hadoop-{hdfs_user}-zkfc.pid") - check_zkfc_process_cmd = format( - "ls {pid_file} >/dev/null 2>&1 && ps `cat {pid_file}` >/dev/null 2>&1") - Execute(check_zkfc_process_cmd, - logoutput=True, - try_sleep=3, - tries=5 - ) - - -if __name__ == "__main__": - HdfsServiceCheck().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py deleted file mode 100644 index 8f682ec..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py +++ /dev/null @@ -1,64 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. 

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py
deleted file mode 100644
index 8f682ec..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/snamenode.py
+++ /dev/null
@@ -1,64 +0,0 @@
-"""
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements. See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership. The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License. You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-
-"""
-
-from resource_management import *
-from hdfs_snamenode import snamenode
-
-
-class SNameNode(Script):
-  def install(self, env):
-    import params
-
-    env.set_params(params)
-
-    self.install_packages(env)
-
-
-  def start(self, env):
-    import params
-
-    env.set_params(params)
-
-    self.config(env)
-    snamenode(action="start")
-
-  def stop(self, env):
-    import params
-
-    env.set_params(params)
-
-    snamenode(action="stop")
-
-  def config(self, env):
-    import params
-
-    env.set_params(params)
-
-    snamenode(action="configure")
-
-  def status(self, env):
-    import status_params
-
-    env.set_params(status_params)
-
-    check_process_status(status_params.snamenode_pid_file)
-
-
-if __name__ == "__main__":
-  SNameNode().execute()
- -""" - -from resource_management import * - -config = Script.get_config() - -hadoop_pid_dir_prefix = config['configurations']['global']['hadoop_pid_dir_prefix'] -hdfs_user = config['configurations']['global']['hdfs_user'] -hdp_pid_dir = format("{hadoop_pid_dir_prefix}/{hdfs_user}") -datanode_pid_file = format("{hdp_pid_dir}/hadoop-{hdfs_user}-datanode.pid") -namenode_pid_file = format("{hdp_pid_dir}/hadoop-{hdfs_user}-namenode.pid") -snamenode_pid_file = format("{hdp_pid_dir}/hadoop-{hdfs_user}-secondarynamenode.pid") -journalnode_pid_file = format("{hdp_pid_dir}/hadoop-{hdfs_user}-journalnode.pid") -zkfc_pid_file = format("{hdp_pid_dir}/hadoop-{hdfs_user}-zkfc.pid") http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/utils.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/utils.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/utils.py deleted file mode 100644 index cc878a8..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/utils.py +++ /dev/null @@ -1,138 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
- -""" - -from resource_management import * - - -def service(action=None, name=None, user=None, create_pid_dir=False, - create_log_dir=False, keytab=None, principal=None): - import params - - kinit_cmd = "true" - pid_dir = format("{hadoop_pid_dir_prefix}/{user}") - pid_file = format("{pid_dir}/hadoop-{user}-{name}.pid") - log_dir = format("{hdfs_log_dir_prefix}/{user}") - hadoop_daemon = format( - "export HADOOP_LIBEXEC_DIR={hadoop_libexec_dir} && " - "{hadoop_bin}/hadoop-daemon.sh") - cmd = format("{hadoop_daemon} --config {hadoop_conf_dir}") - - if create_pid_dir: - Directory(pid_dir, - owner=user, - recursive=True) - if create_log_dir: - Directory(log_dir, - owner=user, - recursive=True) - - if params.security_enabled: - principal_replaced = principal.replace("_HOST", params.hostname) - kinit_cmd = format("kinit -kt {keytab} {principal_replaced}") - - if name == "datanode": - user = "root" - pid_file = format( - "{hadoop_pid_dir_prefix}/{hdfs_user}/hadoop-{hdfs_user}-{name}.pid") - - daemon_cmd = format("{cmd} {action} {name}") - - service_is_up = format( - "ls {pid_file} >/dev/null 2>&1 &&" - " ps `cat {pid_file}` >/dev/null 2>&1") if action == "start" else None - - Execute(kinit_cmd) - Execute(daemon_cmd, - user = user, - not_if=service_is_up - ) - if action == "stop": - File(pid_file, - action="delete", - ignore_failures=True - ) - - -def hdfs_directory(name=None, owner=None, group=None, - mode=None, recursive_chown=False, recursive_chmod=False): - import params - - dir_exists = format("hadoop fs -ls {name} >/dev/null 2>&1") - namenode_safe_mode_off = "hadoop dfsadmin -safemode get|grep 'Safe mode is OFF'" - - stub_dir = params.namenode_dirs_created_stub_dir - stub_filename = params.namenode_dirs_stub_filename - dir_absent_in_stub = format( - "grep -q '^{name}$' {stub_dir}/{stub_filename} > /dev/null 2>&1; test $? -ne 0") - record_dir_in_stub = format("echo '{name}' >> {stub_dir}/{stub_filename}") - tries = 3 - try_sleep = 10 - dfs_check_nn_status_cmd = "true" - - if params.dfs_ha_enabled: - namenode_id = params.namenode_id - dfs_check_nn_status_cmd = format( - "hdfs haadmin -getServiceState $namenode_id | grep active > /dev/null") - - #if params.stack_version[0] == "2": - mkdir_cmd = format("fs -mkdir -p {name}") - #else: - # mkdir_cmd = format("fs -mkdir {name}") - - if params.security_enabled: - Execute(format("kinit -kt {hdfs_user_keytab} {hdfs_user}"), - user = params.hdfs_user) - ExecuteHadoop(mkdir_cmd, - try_sleep=try_sleep, - tries=tries, - not_if=format( - "{dir_absent_in_stub} && {dfs_check_nn_status_cmd} && " - "{dir_exists} && ! {namenode_safe_mode_off}"), - only_if=format( - "{dir_absent_in_stub} && {dfs_check_nn_status_cmd} && " - "! {dir_exists}"), - conf_dir=params.hadoop_conf_dir, - user=params.hdfs_user - ) - Execute(record_dir_in_stub, - user=params.hdfs_user, - only_if=format("! 
{dir_absent_in_stub}") - ) - - recursive = "-R" if recursive_chown else "" - perm_cmds = [] - - if owner: - chown = owner - if group: - chown = format("{owner}:{group}") - perm_cmds.append(format("fs -chown {recursive} {chown} {name}")) - if mode: - perm_cmds.append(format("fs -chmod {recursive} {mode} {name}")) - for cmd in perm_cmds: - ExecuteHadoop(cmd, - user=params.hdfs_user, - only_if=format("{dir_absent_in_stub} && {dfs_check_nn_status_cmd} && {namenode_safe_mode_off} && {dir_exists}"), - try_sleep=try_sleep, - tries=tries, - conf_dir=params.hadoop_conf_dir - ) - - - http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/zkfc_slave.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/zkfc_slave.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/zkfc_slave.py deleted file mode 100644 index 1f9ba65..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HDFS/package/scripts/zkfc_slave.py +++ /dev/null @@ -1,62 +0,0 @@ -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
- -""" - -from resource_management import * -from utils import service - - -class ZkfcSlave(Script): - def install(self, env): - import params - - self.install_packages(env) - env.set_params(params) - - def start(self, env): - import params - - env.set_params(params) - self.config(env) - service( - action="start", name="zkfc", user=params.hdfs_user, create_pid_dir=True, - create_log_dir=True - ) - - def stop(self, env): - import params - - env.set_params(params) - service( - action="stop", name="zkfc", user=params.hdfs_user, create_pid_dir=True, - create_log_dir=True - ) - - def config(self, env): - pass - - def status(self, env): - import status_params - - env.set_params(status_params) - - check_process_status(status_params.zkfc_pid_file) - - -if __name__ == "__main__": - ZkfcSlave().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/configuration/hive-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/configuration/hive-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/configuration/hive-site.xml deleted file mode 100644 index b3ed5f5..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/configuration/hive-site.xml +++ /dev/null @@ -1,267 +0,0 @@ - - - - - - - - ambari.hive.db.schema.name - hive - Database name used as the Hive Metastore - - - - javax.jdo.option.ConnectionURL - jdbc - JDBC connect string for a JDBC metastore - - - - javax.jdo.option.ConnectionDriverName - com.mysql.jdbc.Driver - Driver class name for a JDBC metastore - - - - javax.jdo.option.ConnectionUserName - hive - username to use against metastore database - - - - javax.jdo.option.ConnectionPassword - - password to use against metastore database - - - - hive.metastore.warehouse.dir - /apps/hive/warehouse - location of default database for the warehouse - - - - hive.metastore.sasl.enabled - - If true, the metastore thrift interface will be secured with SASL. - Clients must authenticate with Kerberos. - - - - hive.metastore.kerberos.keytab.file - - The path to the Kerberos Keytab file containing the metastore - thrift server's service principal. - - - - hive.metastore.kerberos.principal - - The service principal for the metastore thrift server. The special - string _HOST will be replaced automatically with the correct host name. - - - - hive.metastore.cache.pinobjtypes - Table,Database,Type,FieldSchema,Order - List of comma separated metastore object types that should be pinned in the cache - - - - hive.metastore.uris - thrift://localhost:9083 - URI for client to contact metastore server - - - - hive.semantic.analyzer.factory.impl - org.apache.hivealog.cli.HCatSemanticAnalyzerFactory - controls which SemanticAnalyzerFactory implemenation class is used by CLI - - - - hive.metastore.client.socket.timeout - 60 - MetaStore Client socket timeout in seconds - - - - hive.metastore.execute.setugi - true - In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored. 
-
-  <property>
-    <name>hive.security.authorization.enabled</name>
-    <value>false</value>
-    <description>enable or disable the hive client authorization</description>
-  </property>
-
-  <property>
-    <name>hive.security.authorization.manager</name>
-    <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
-    <description>the hive client authorization manager class name.
-    The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.</description>
-  </property>
-
-  <property>
-    <name>hive.security.metastore.authorization.manager</name>
-    <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
-    <description>The authorization manager class name to be used in the metastore for authorization. The user-defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider.</description>
-  </property>
-
-  <property>
-    <name>hive.security.authenticator.manager</name>
-    <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>
-    <description>Hive client authenticator manager class name. The user-defined authenticator class should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.</description>
-  </property>
-
-  <property>
-    <name>hive.server2.enable.doAs</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>fs.hdfs.impl.disable.cache</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>fs.file.impl.disable.cache</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hive.enforce.bucketing</name>
-    <value>true</value>
-    <description>Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced.</description>
-  </property>
-
-  <property>
-    <name>hive.enforce.sorting</name>
-    <value>true</value>
-    <description>Whether sorting is enforced. If true, while inserting into the table, sorting is enforced.</description>
-  </property>
-
-  <property>
-    <name>hive.map.aggr</name>
-    <value>true</value>
-    <description>Whether to use map-side aggregation in Hive Group By queries.</description>
-  </property>
-
-  <property>
-    <name>hive.optimize.bucketmapjoin</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hive.optimize.bucketmapjoin.sortedmerge</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hive.mapred.reduce.tasks.speculative.execution</name>
-    <value>false</value>
-    <description>Whether speculative execution for reducers should be turned on.</description>
-  </property>
-
-  <property>
-    <name>hive.auto.convert.join</name>
-    <value>true</value>
-    <description>Whether Hive enable the optimization about converting common
-    join into mapjoin based on the input file size.</description>
-  </property>
-
-  <property>
-    <name>hive.auto.convert.sortmerge.join</name>
-    <value>true</value>
-    <description>Will the join be automatically converted to a sort-merge join, if the joined tables pass
-    the criteria for sort-merge join.</description>
-  </property>
-
-  <property>
-    <name>hive.auto.convert.sortmerge.join.noconditionaltask</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hive.auto.convert.join.noconditionaltask</name>
-    <value>true</value>
-    <description>Whether Hive enable the optimization about converting common join into mapjoin based on the input file
-    size. If this paramater is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than the
-    specified size, the join is directly converted to a mapjoin (there is no conditional task).</description>
-  </property>
-
-  <property>
-    <name>hive.auto.convert.join.noconditionaltask.size</name>
-    <value>1000000000</value>
-    <description>If hive.auto.convert.join.noconditionaltask is off, this parameter does not take affect. However, if it
-    is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than this size, the join is directly
-    converted to a mapjoin(there is no conditional task). The default is 10MB.</description>
-  </property>
-
-  <property>
-    <name>hive.optimize.reducededuplication.min.reducer</name>
-    <value>1</value>
-    <description>Reduce deduplication merges two RSs by moving key/parts/reducer-num of the child RS to parent RS.
-    That means if reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can make very slow, single MR.
-    The optimization will be disabled if number of reducers is less than specified value.</description>
-  </property>
-
-  <property>
-    <name>hive.optimize.mapjoin.mapreduce</name>
-    <value>true</value>
-    <description>If hive.auto.convert.join is off, this parameter does not take
-    affect. If it is on, and if there are map-join jobs followed by a map-reduce
-    job (for e.g a group by), each map-only job is merged with the following
-    map-reduce job.</description>
-  </property>
-
-  <property>
-    <name>hive.mapjoin.bucket.cache.size</name>
-    <value>10000</value>
-    <description>
-    Size per reducer.The default is 1G, i.e if the input size is 10G, it
-    will use 10 reducers.</description>
-  </property>
-
-  <property>
-    <name>hive.vectorized.execution.enabled</name>
-    <value>false</value>
-  </property>
-
-  <property>
-    <name>hive.optimize.reducededuplication</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hive.optimize.index.filter</name>
-    <value>true</value>
-    <description>
-    Whether to enable automatic use of indexes</description>
-  </property>
-
-</configuration>
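On the Python side, these values surface through the config dictionary that Script.get_config() returns, just as params.py reads hdfs-site above. A self-contained sketch of that shape (the dict literal stands in for the real parsed command JSON):

    # sketch: the shape of the dict Script.get_config() returns
    config = {'configurations': {'hive-site': {
        'hive.metastore.warehouse.dir': '/apps/hive/warehouse'}}}
    hive_apps_whs_dir = config['configurations']['hive-site']['hive.metastore.warehouse.dir']
    assert hive_apps_whs_dir == '/apps/hive/warehouse'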

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/metainfo.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/metainfo.xml
deleted file mode 100644
index 2384fe9..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/metainfo.xml
+++ /dev/null
@@ -1,156 +0,0 @@
-<metainfo>
-  <schemaVersion>2.0</schemaVersion>
-  <services>
-    <service>
-      <name>HIVE</name>
-      <comment>Data warehouse system for ad-hoc queries &amp; analysis of large datasets and table &amp; storage management service</comment>
-      <version>0.12.0.2.0.6.0</version>
-      <components>
-        <component>
-          <name>HIVE_METASTORE</name>
-          <category>MASTER</category>
-          <commandScript>
-            <scriptType>PYTHON</scriptType>
-            <timeout>600</timeout>
-          </commandScript>
-        </component>
-        <component>
-          <name>HIVE_SERVER</name>
-          <category>MASTER</category>
-          <commandScript>
-            <scriptType>PYTHON</scriptType>
-          </commandScript>
-        </component>
-        <component>
-          <name>MYSQL_SERVER</name>
-          <category>MASTER</category>
-          <commandScript>
-            <scriptType>PYTHON</scriptType>
-          </commandScript>
-        </component>
-        <component>
-          <name>HIVE_CLIENT</name>
-          <category>CLIENT</category>
-          <commandScript>
-            <scriptType>PYTHON</scriptType>
-          </commandScript>
-        </component>
-      </components>
-      <osSpecifics>
-        <osSpecific>
-          <osType>any</osType>
-          <packages>
-            <package>
-              <type>rpm</type>
-              <name>hive</name>
-            </package>
-            <package>
-              <type>rpm</type>
-              <name>mysql-connector-java</name>
-            </package>
-            <package>
-              <type>rpm</type>
-              <name>mysql</name>
-            </package>
-          </packages>
-        </osSpecific>
-        <osSpecific>
-          <osType>centos6</osType>
-          <packages>
-            <package>
-              <type>rpm</type>
-              <name>mysql-server</name>
-            </package>
-          </packages>
-        </osSpecific>
-        <osSpecific>
-          <osType>centos5</osType>
-          <packages>
-            <package>
-              <type>rpm</type>
-              <name>mysql-server</name>
-            </package>
-          </packages>
-        </osSpecific>
-        <osSpecific>
-          <osType>suse</osType>
-          <packages>
-            <package>
-              <type>rpm</type>
-              <name>mysql-client</name>
-            </package>
-          </packages>
-        </osSpecific>
-      </osSpecifics>
-      <commandScript>
-        <scriptType>PYTHON</scriptType>
-        <timeout>300</timeout>
-      </commandScript>
-      <configuration-dependencies>
-        <config-type>hive-site</config-type>
-      </configuration-dependencies>
-    </service>
-    <service>
-      <name>HCATALOG</name>
-      <comment>This is comment for HCATALOG service</comment>
-      <version>0.12.0.2.0.6.0</version>
-      <components>
-        <component>
-          <name>HCAT</name>
-          <category>CLIENT</category>
-          <commandScript>
-            <scriptType>PYTHON</scriptType>
-          </commandScript>
-        </component>
-      </components>
-      <osSpecifics>
-        <osSpecific>
-          <osType>any</osType>
-          <packages>
-            <package>
-              <type>rpm</type>
-              <name>hcatalog</name>
-            </package>
-          </packages>
-        </osSpecific>
-      </osSpecifics>
-      <commandScript>
-        <scriptType>PYTHON</scriptType>
-        <timeout>300</timeout>
-      </commandScript>
-    </service>
-  </services>
-</metainfo>
-#
-#
-
-mysqldservice=$1
-mysqldbuser=$2
-mysqldbpasswd=$3
-mysqldbhost=$4
-myhostname=$(hostname -f)
-
-service $mysqldservice start
-echo "Adding user $mysqldbuser@$mysqldbhost and $mysqldbuser@localhost"
-mysql -u root -e "CREATE USER '$mysqldbuser'@'$mysqldbhost' IDENTIFIED BY '$mysqldbpasswd';"
-mysql -u root -e "CREATE USER '$mysqldbuser'@'localhost' IDENTIFIED BY '$mysqldbpasswd';"
-mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO '$mysqldbuser'@'$mysqldbhost';"
-mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO '$mysqldbuser'@'localhost';"
-# Also create the user for the local FQDN, if it does not exist yet.
-if [ -z "$(mysql -u root --skip-column-names -e "SELECT user FROM mysql.user WHERE user='$mysqldbuser' AND host='$myhostname';")" ]; then
-  echo "Adding user $mysqldbuser@$myhostname"
-  mysql -u root -e "CREATE USER '$mysqldbuser'@'$myhostname' IDENTIFIED BY '$mysqldbpasswd';"
-  mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO '$mysqldbuser'@'$myhostname';"
-fi
-mysql -u root -e "FLUSH PRIVILEGES;"
-service $mysqldservice stop

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hcatSmoke.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hcatSmoke.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hcatSmoke.sh
deleted file mode 100644
index 9e7b33f..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hcatSmoke.sh
+++ /dev/null
@@ -1,35 +0,0 @@
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-#
-export tablename=$1
-
-case "$2" in
-
-prepare)
-  hcat -e "show tables"
-  hcat -e "drop table IF EXISTS ${tablename}"
-  hcat -e "create table ${tablename} ( id INT, name string ) stored as rcfile ;"
-;;
-
-cleanup)
-  hcat -e "drop table IF EXISTS ${tablename}"
-;;
-
-esac
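Both scripts above take positional arguments only. A hedged Python driver showing the expected argument order (the wrapper functions and sample values are assumptions for illustration; Ambari itself invokes these scripts through its resource_management Execute resource, as the service check further below shows):

import subprocess

def add_mysql_user(service, db_user, db_password, db_host):
    # addMysqlUser.sh <mysqld service name> <db user> <db password> <db host>
    subprocess.check_call(['sh', 'addMysqlUser.sh',
                           service, db_user, db_password, db_host])

def hcat_smoke(table, phase):
    # hcatSmoke.sh <table name> prepare|cleanup
    subprocess.check_call(['sh', 'hcatSmoke.sh', table, phase])

# Example: add_mysql_user('mysqld', 'hive', 'hivepassword', 'db.example.com')
# Example: hcat_smoke('hcatsmoke001', 'prepare')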
http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveSmoke.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveSmoke.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveSmoke.sh
deleted file mode 100644
index 7e03524..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveSmoke.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-#
-export tablename=$1
-echo "CREATE EXTERNAL TABLE IF NOT EXISTS ${tablename} ( foo INT, bar STRING );" | hive
-echo "DESCRIBE ${tablename};" | hive

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2.sql
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2.sql b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2.sql
deleted file mode 100644
index 99a3865..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2.sql
+++ /dev/null
@@ -1,23 +0,0 @@
---
--- Licensed to the Apache Software Foundation (ASF) under one
--- or more contributor license agreements. See the NOTICE file
--- distributed with this work for additional information
--- regarding copyright ownership. The ASF licenses this file
--- to you under the Apache License, Version 2.0 (the
--- "License"); you may not use this file except in compliance
--- with the License. You may obtain a copy of the License at
---
---     http://www.apache.org/licenses/LICENSE-2.0
---
--- Unless required by applicable law or agreed to in writing,
--- software distributed under the License is distributed on an
--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
--- KIND, either express or implied. See the License for the
--- specific language governing permissions and limitations
--- under the License.
---
-
-CREATE EXTERNAL TABLE IF NOT EXISTS hiveserver2smoke20408 ( foo INT, bar STRING );
-DESCRIBE hiveserver2smoke20408;
http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2Smoke.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2Smoke.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2Smoke.sh
deleted file mode 100644
index 051a21e..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/hiveserver2Smoke.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-#
-# $1 = JDBC URL, $2 = SQL file to run through beeline
-smokeout=`/usr/lib/hive/bin/beeline -u $1 -n fakeuser -p fakepwd -d org.apache.hive.jdbc.HiveDriver -e "!run $2" 2>&1 | grep Error`
-
-if [ -z "$smokeout" ]; then
-  echo "Smoke test of hiveserver2 passed"
-  exit 0
-else
-  echo "Smoke test of hiveserver2 failed"
-  echo $smokeout
-  exit 1
-fi

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/pigSmoke.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/pigSmoke.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/pigSmoke.sh
deleted file mode 100644
index 2e90ac0..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/pigSmoke.sh
+++ /dev/null
@@ -1,18 +0,0 @@
--- Licensed to the Apache Software Foundation (ASF) under one or more
--- contributor license agreements. See the NOTICE file distributed with
--- this work for additional information regarding copyright ownership.
--- The ASF licenses this file to You under the Apache License, Version 2.0
--- (the "License"); you may not use this file except in compliance with
--- the License. You may obtain a copy of the License at
--
--     http://www.apache.org/licenses/LICENSE-2.0
--
--- Unless required by applicable law or agreed to in writing, software
--- distributed under the License is distributed on an "AS IS" BASIS,
--- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
--- See the License for the specific language governing permissions and
--- limitations under the License
-
-A = load 'passwd' using PigStorage(':');
-B = foreach A generate $0 as id;
-store B into 'pigsmoke.out';
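The hiveserver2Smoke.sh check above boils down to "run a SQL file through beeline and fail if any output line mentions Error". The same check in Python (an illustrative rendering, not Ambari code; the function name is made up):

import subprocess

def hiveserver2_smoke(jdbc_url, sql_file):
    # Run the SQL file through beeline and scan the combined output for errors.
    proc = subprocess.Popen(
        ['/usr/lib/hive/bin/beeline', '-u', jdbc_url,
         '-n', 'fakeuser', '-p', 'fakepwd',
         '-d', 'org.apache.hive.jdbc.HiveDriver', '-e', '!run ' + sql_file],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    out = proc.communicate()[0]
    return b'Error' not in out

# Example: hiveserver2_smoke('jdbc:hive2://localhost:10000', 'hiveserver2.sql')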
http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startHiveserver2.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startHiveserver2.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startHiveserver2.sh
deleted file mode 100644
index fa90c2f..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startHiveserver2.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-#
-# $1 = stdout log, $2 = stderr log, $3 = pid file, $4 = Hive conf dir
-HIVE_CONF_DIR=$4 /usr/lib/hive/bin/hiveserver2 -hiveconf hive.metastore.uris=' ' > $1 2> $2 &
-echo $! > $3

http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startMetastore.sh
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startMetastore.sh b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startMetastore.sh
deleted file mode 100644
index 9350776..0000000
--- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/files/startMetastore.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-#
-#
-# $1 = stdout log, $2 = stderr log, $3 = pid file, $4 = Hive conf dir
-HIVE_CONF_DIR=$4 hive --service metastore > $1 2> $2 &
-echo $! > $3
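Both start scripts above take the same four positional arguments: stdout log, stderr log, pid file, and the Hive configuration directory. An illustrative call (the paths are sample values, not Ambari defaults):

import subprocess

subprocess.check_call(['sh', 'startMetastore.sh',
                       '/var/log/hive/metastore.out',  # $1: stdout log
                       '/var/log/hive/metastore.err',  # $2: stderr log
                       '/var/run/hive/metastore.pid',  # $3: pid file
                       '/etc/hive/conf'])              # $4: HIVE_CONF_DIR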
- -""" http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat.py deleted file mode 100644 index 2993d3a..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat.py +++ /dev/null @@ -1,47 +0,0 @@ -#!/usr/bin/env python -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -""" - -from resource_management import * -import sys - - -def hcat(): - import params - - Directory(params.hcat_conf_dir, - owner=params.hcat_user, - group=params.user_group, - ) - - Directory(params.hcat_pid_dir, - owner=params.webhcat_user, - recursive=True - ) - - hcat_TemplateConfig('hcat-env.sh') - - -def hcat_TemplateConfig(name): - import params - - TemplateConfig(format("{hcat_conf_dir}/{name}"), - owner=params.hcat_user, - group=params.user_group - ) http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_client.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_client.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_client.py deleted file mode 100644 index 8b5921a..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_client.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
- -""" - -import sys -from resource_management import * -from hcat import hcat - -class HCatClient(Script): - def install(self, env): - self.install_packages(env) - self.configure(env) - - def configure(self, env): - import params - - env.set_params(params) - - hcat() - - - def status(self, env): - raise ClientComponentHasNoStatus() - - -if __name__ == "__main__": - HCatClient().execute() http://git-wip-us.apache.org/repos/asf/ambari/blob/ef81b392/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_service_check.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_service_check.py b/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_service_check.py deleted file mode 100644 index 5112e99..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.0._/services/HIVE/package/scripts/hcat_service_check.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env python -""" -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. - -""" - -from resource_management import * - -def hcat_service_check(): - import params - - unique = get_unique_id_and_date() - output_file = format("/apps/hive/warehouse/hcatsmoke{unique}") - test_cmd = format("fs -test -e {output_file}") - - if params.security_enabled: - kinit_cmd = format( - "{kinit_path_local} -kt {smoke_user_keytab} {smokeuser}; ") - else: - kinit_cmd = "" - - File('/tmp/hcatSmoke.sh', - content=StaticFile("hcatSmoke.sh"), - mode=0755 - ) - - prepare_cmd = format("{kinit_cmd}sh /tmp/hcatSmoke.sh hcatsmoke{unique} prepare") - - Execute(prepare_cmd, - tries=3, - user=params.smokeuser, - try_sleep=5, - path=['/usr/sbin', '/usr/local/nin', '/bin', '/usr/bin'], - logoutput=True) - - ExecuteHadoop(test_cmd, - user=params.hdfs_user, - logoutput=True, - conf_dir=params.hadoop_conf_dir) - - cleanup_cmd = format("{kinit_cmd}sh /tmp/hcatSmoke.sh hcatsmoke{unique} cleanup") - - Execute(cleanup_cmd, - tries=3, - user=params.smokeuser, - try_sleep=5, - path=['/usr/sbin', '/usr/local/nin', '/bin', '/usr/bin'], - logoutput=True - )