From: mahadev@apache.org
To: commits@ambari.apache.org
Reply-To: ambari-dev@ambari.apache.org
Date: Fri, 17 Jan 2014 19:49:27 -0000
Message-Id: <7d01a739fe0448b18065d34384b45f50@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [12/12] git commit: AMBARI-4336. Move 1.3.4 stack to 1.3.3 using the python libraries. (mahadev)

AMBARI-4336. Move 1.3.4 stack to 1.3.3 using the python libraries. (mahadev)

Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/92583535
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/92583535
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/92583535

Branch: refs/heads/trunk
Commit: 92583535dc8ad24c49f9d7f4c6c9c8b56575c497
Parents: 186d6a7
Author: Mahadev Konar
Authored: Fri Jan 17 11:31:56 2014 -0800
Committer: Mahadev Konar
Committed: Fri Jan 17 11:31:56 2014 -0800

----------------------------------------------------------------------
 .../HDP/1.3.3/services/FLUME/metainfo.xml | 1 +
 .../HDP/1.3.3/services/GANGLIA/metainfo.xml | 102 +-
 .../HDP/1.3.3/services/HBASE/metainfo.xml | 113 +-
 .../services/HCATALOG/configuration/global.xml | 45 -
 .../HDP/1.3.3/services/HCATALOG/metainfo.xml | 30 -
 .../services/HDFS/configuration/hdfs-site.xml | 372 ++--
 .../stacks/HDP/1.3.3/services/HDFS/metainfo.xml | 134 +-
 .../services/HIVE/configuration/global.xml | 23 +
 .../services/HIVE/configuration/hive-site.xml | 8 +-
 .../stacks/HDP/1.3.3/services/HIVE/metainfo.xml | 175 ++-
 .../stacks/HDP/1.3.3/services/HUE/metainfo.xml | 1 +
 .../MAPREDUCE/configuration/mapred-site.xml | 552 ++++---
 .../HDP/1.3.3/services/MAPREDUCE/metainfo.xml | 92 +-
 .../HDP/1.3.3/services/NAGIOS/metainfo.xml | 94 +-
 .../HDP/1.3.3/services/OOZIE/metainfo.xml | 103 +-
 .../stacks/HDP/1.3.3/services/PIG/metainfo.xml | 47 +-
 .../HDP/1.3.3/services/SQOOP/metainfo.xml | 63 +-
 .../HDP/1.3.3/services/WEBHCAT/metainfo.xml | 91 +-
 .../HDP/1.3.3/services/ZOOKEEPER/metainfo.xml | 57 +-
 .../before-INSTALL/files/changeToSecureUid.sh | 50 -
 .../1.3.4/hooks/before-INSTALL/scripts/hook.py | 36 -
 .../hooks/before-INSTALL/scripts/params.py | 81 --
 .../scripts/shared_initialization.py | 107 --
 .../hooks/before-START/files/checkForFormat.sh | 62 -
 .../before-START/files/task-log4j.properties | 132 --
 .../1.3.4/hooks/before-START/scripts/hook.py | 37 -
 .../1.3.4/hooks/before-START/scripts/params.py | 172 ---
.../scripts/shared_initialization.py | 322 ----- .../templates/commons-logging.properties.j2 | 25 - .../templates/exclude_hosts_list.j2 | 3 - .../before-START/templates/hadoop-env.sh.j2 | 121 -- .../templates/hadoop-metrics2.properties.j2 | 45 - .../hooks/before-START/templates/hdfs.conf.j2 | 17 - .../before-START/templates/health_check-v2.j2 | 91 -- .../before-START/templates/health_check.j2 | 118 -- .../templates/include_hosts_list.j2 | 3 - .../before-START/templates/log4j.properties.j2 | 200 --- .../hooks/before-START/templates/slaves.j2 | 3 - .../hooks/before-START/templates/snmpd.conf.j2 | 48 - .../templates/taskcontroller.cfg.j2 | 20 - .../resources/stacks/HDP/1.3.4/metainfo.xml | 22 - .../stacks/HDP/1.3.4/repos/repoinfo.xml | 75 - .../services/FLUME/configuration/global.xml | 24 - .../HDP/1.3.4/services/FLUME/metainfo.xml | 31 - .../services/GANGLIA/configuration/global.xml | 55 - .../HDP/1.3.4/services/GANGLIA/metainfo.xml | 106 -- .../GANGLIA/package/files/checkGmetad.sh | 37 - .../GANGLIA/package/files/checkGmond.sh | 62 - .../GANGLIA/package/files/checkRrdcached.sh | 34 - .../services/GANGLIA/package/files/gmetad.init | 73 - .../services/GANGLIA/package/files/gmetadLib.sh | 204 --- .../services/GANGLIA/package/files/gmond.init | 73 - .../services/GANGLIA/package/files/gmondLib.sh | 545 ------- .../1.3.4/services/GANGLIA/package/files/rrd.py | 213 --- .../GANGLIA/package/files/rrdcachedLib.sh | 47 - .../GANGLIA/package/files/setupGanglia.sh | 141 -- .../GANGLIA/package/files/startGmetad.sh | 64 - .../GANGLIA/package/files/startGmond.sh | 80 -- .../GANGLIA/package/files/startRrdcached.sh | 69 - .../GANGLIA/package/files/stopGmetad.sh | 43 - .../services/GANGLIA/package/files/stopGmond.sh | 54 - .../GANGLIA/package/files/stopRrdcached.sh | 41 - .../GANGLIA/package/files/teardownGanglia.sh | 28 - .../services/GANGLIA/package/scripts/ganglia.py | 106 -- .../GANGLIA/package/scripts/ganglia_monitor.py | 163 --- .../package/scripts/ganglia_monitor_service.py | 31 - .../GANGLIA/package/scripts/ganglia_server.py | 181 --- .../package/scripts/ganglia_server_service.py | 27 - .../services/GANGLIA/package/scripts/params.py | 74 - .../GANGLIA/package/scripts/status_params.py | 25 - .../package/templates/gangliaClusters.conf.j2 | 34 - .../GANGLIA/package/templates/gangliaEnv.sh.j2 | 24 - .../GANGLIA/package/templates/gangliaLib.sh.j2 | 62 - .../services/HBASE/configuration/global.xml | 160 --- .../HBASE/configuration/hbase-policy.xml | 53 - .../services/HBASE/configuration/hbase-site.xml | 367 ----- .../HDP/1.3.4/services/HBASE/metainfo.xml | 123 -- .../HBASE/package/files/hbaseSmokeVerify.sh | 32 - .../services/HBASE/package/scripts/__init__.py | 19 - .../services/HBASE/package/scripts/functions.py | 67 - .../services/HBASE/package/scripts/hbase.py | 91 -- .../HBASE/package/scripts/hbase_client.py | 52 - .../HBASE/package/scripts/hbase_master.py | 74 - .../HBASE/package/scripts/hbase_regionserver.py | 75 - .../HBASE/package/scripts/hbase_service.py | 46 - .../services/HBASE/package/scripts/params.py | 84 -- .../HBASE/package/scripts/service_check.py | 89 -- .../HBASE/package/scripts/status_params.py | 25 - .../hadoop-metrics.properties-GANGLIA-MASTER.j2 | 50 - .../hadoop-metrics.properties-GANGLIA-RS.j2 | 50 - .../templates/hadoop-metrics.properties.j2 | 50 - .../HBASE/package/templates/hbase-env.sh.j2 | 82 -- .../HBASE/package/templates/hbase-smoke.sh.j2 | 26 - .../package/templates/hbase_client_jaas.conf.j2 | 23 - .../templates/hbase_grant_permissions.j2 | 21 - 
.../package/templates/hbase_master_jaas.conf.j2 | 25 - .../templates/hbase_regionserver_jaas.conf.j2 | 25 - .../HBASE/package/templates/regionservers.j2 | 2 - .../services/HDFS/configuration/core-site.xml | 253 ---- .../services/HDFS/configuration/global.xml | 187 --- .../HDFS/configuration/hadoop-policy.xml | 134 -- .../services/HDFS/configuration/hdfs-site.xml | 476 ------ .../stacks/HDP/1.3.4/services/HDFS/metainfo.xml | 146 -- .../HDFS/package/files/checkForFormat.sh | 62 - .../services/HDFS/package/files/checkWebUI.py | 53 - .../services/HDFS/package/scripts/datanode.py | 57 - .../HDFS/package/scripts/hdfs_client.py | 52 - .../HDFS/package/scripts/hdfs_datanode.py | 59 - .../HDFS/package/scripts/hdfs_namenode.py | 192 --- .../HDFS/package/scripts/hdfs_snamenode.py | 53 - .../services/HDFS/package/scripts/namenode.py | 66 - .../services/HDFS/package/scripts/params.py | 165 --- .../HDFS/package/scripts/service_check.py | 106 -- .../services/HDFS/package/scripts/snamenode.py | 64 - .../HDFS/package/scripts/status_params.py | 31 - .../services/HDFS/package/scripts/utils.py | 133 -- .../package/templates/exclude_hosts_list.j2 | 3 - .../services/HIVE/configuration/global.xml | 148 -- .../services/HIVE/configuration/hive-site.xml | 236 --- .../stacks/HDP/1.3.4/services/HIVE/metainfo.xml | 186 --- .../services/HIVE/package/files/addMysqlUser.sh | 41 - .../services/HIVE/package/files/hcatSmoke.sh | 35 - .../services/HIVE/package/files/hiveSmoke.sh | 23 - .../services/HIVE/package/files/hiveserver2.sql | 23 - .../HIVE/package/files/hiveserver2Smoke.sh | 31 - .../services/HIVE/package/files/pigSmoke.sh | 18 - .../HIVE/package/files/startHiveserver2.sh | 22 - .../HIVE/package/files/startMetastore.sh | 22 - .../services/HIVE/package/scripts/__init__.py | 19 - .../1.3.4/services/HIVE/package/scripts/hcat.py | 47 - .../HIVE/package/scripts/hcat_client.py | 41 - .../HIVE/package/scripts/hcat_service_check.py | 63 - .../1.3.4/services/HIVE/package/scripts/hive.py | 122 -- .../HIVE/package/scripts/hive_client.py | 41 - .../HIVE/package/scripts/hive_metastore.py | 63 - .../HIVE/package/scripts/hive_server.py | 63 - .../HIVE/package/scripts/hive_service.py | 56 - .../HIVE/package/scripts/mysql_server.py | 77 - .../HIVE/package/scripts/mysql_service.py | 38 - .../services/HIVE/package/scripts/params.py | 123 -- .../HIVE/package/scripts/service_check.py | 56 - .../HIVE/package/scripts/status_params.py | 30 - .../HIVE/package/templates/hcat-env.sh.j2 | 25 - .../HIVE/package/templates/hive-env.sh.j2 | 55 - .../1.3.4/services/HUE/configuration/global.xml | 35 - .../services/HUE/configuration/hue-site.xml | 290 ---- .../stacks/HDP/1.3.4/services/HUE/metainfo.xml | 32 - .../configuration/capacity-scheduler.xml | 195 --- .../MAPREDUCE/configuration/core-site.xml | 20 - .../services/MAPREDUCE/configuration/global.xml | 160 --- .../configuration/mapred-queue-acls.xml | 39 - .../MAPREDUCE/configuration/mapred-site.xml | 601 -------- .../HDP/1.3.4/services/MAPREDUCE/metainfo.xml | 102 -- .../MAPREDUCE/package/scripts/client.py | 43 - .../MAPREDUCE/package/scripts/historyserver.py | 59 - .../MAPREDUCE/package/scripts/jobtracker.py | 104 -- .../MAPREDUCE/package/scripts/mapreduce.py | 50 - .../MAPREDUCE/package/scripts/params.py | 54 - .../MAPREDUCE/package/scripts/service.py | 56 - .../MAPREDUCE/package/scripts/service_check.py | 89 -- .../MAPREDUCE/package/scripts/status_params.py | 33 - .../MAPREDUCE/package/scripts/tasktracker.py | 104 -- .../package/templates/exclude_hosts_list.j2 | 3 - 
.../services/NAGIOS/configuration/global.xml | 50 - .../HDP/1.3.4/services/NAGIOS/metainfo.xml | 106 -- .../NAGIOS/package/files/check_aggregate.php | 243 ---- .../services/NAGIOS/package/files/check_cpu.pl | 114 -- .../package/files/check_datanode_storage.php | 100 -- .../NAGIOS/package/files/check_hdfs_blocks.php | 115 -- .../package/files/check_hdfs_capacity.php | 109 -- .../files/check_hive_metastore_status.sh | 45 - .../NAGIOS/package/files/check_hue_status.sh | 31 - .../files/check_mapred_local_dir_used.sh | 34 - .../package/files/check_name_dir_status.php | 93 -- .../NAGIOS/package/files/check_namenodes_ha.sh | 82 -- .../package/files/check_nodemanager_health.sh | 44 - .../NAGIOS/package/files/check_oozie_status.sh | 45 - .../NAGIOS/package/files/check_rpcq_latency.php | 104 -- .../package/files/check_templeton_status.sh | 45 - .../NAGIOS/package/files/check_webui.sh | 87 -- .../NAGIOS/package/files/hdp_nagios_init.php | 81 -- .../NAGIOS/package/scripts/functions.py | 31 - .../services/NAGIOS/package/scripts/nagios.py | 97 -- .../NAGIOS/package/scripts/nagios_server.py | 87 -- .../package/scripts/nagios_server_config.py | 91 -- .../NAGIOS/package/scripts/nagios_service.py | 36 - .../services/NAGIOS/package/scripts/params.py | 168 --- .../NAGIOS/package/scripts/status_params.py | 26 - .../NAGIOS/package/templates/contacts.cfg.j2 | 91 -- .../package/templates/hadoop-commands.cfg.j2 | 114 -- .../package/templates/hadoop-hostgroups.cfg.j2 | 33 - .../package/templates/hadoop-hosts.cfg.j2 | 34 - .../templates/hadoop-servicegroups.cfg.j2 | 98 -- .../package/templates/hadoop-services.cfg.j2 | 714 --------- .../NAGIOS/package/templates/nagios.cfg.j2 | 1349 ------------------ .../NAGIOS/package/templates/nagios.conf.j2 | 62 - .../services/NAGIOS/package/templates/nagios.j2 | 146 -- .../NAGIOS/package/templates/resource.cfg.j2 | 51 - .../services/OOZIE/configuration/global.xml | 105 -- .../services/OOZIE/configuration/oozie-site.xml | 237 --- .../HDP/1.3.4/services/OOZIE/metainfo.xml | 113 -- .../services/OOZIE/package/files/oozieSmoke.sh | 93 -- .../OOZIE/package/files/wrap_ooziedb.sh | 31 - .../services/OOZIE/package/scripts/oozie.py | 99 -- .../OOZIE/package/scripts/oozie_client.py | 53 - .../OOZIE/package/scripts/oozie_server.py | 65 - .../OOZIE/package/scripts/oozie_service.py | 45 - .../services/OOZIE/package/scripts/params.py | 64 - .../OOZIE/package/scripts/service_check.py | 47 - .../OOZIE/package/scripts/status_params.py | 26 - .../OOZIE/package/templates/oozie-env.sh.j2 | 64 - .../package/templates/oozie-log4j.properties.j2 | 74 - .../services/PIG/configuration/pig.properties | 52 - .../stacks/HDP/1.3.4/services/PIG/metainfo.xml | 61 - .../services/PIG/package/files/pigSmoke.sh | 18 - .../services/PIG/package/scripts/params.py | 36 - .../1.3.4/services/PIG/package/scripts/pig.py | 46 - .../services/PIG/package/scripts/pig_client.py | 52 - .../PIG/package/scripts/service_check.py | 75 - .../PIG/package/templates/log4j.properties.j2 | 30 - .../PIG/package/templates/pig-env.sh.j2 | 17 - .../PIG/package/templates/pig.properties.j2 | 55 - .../HDP/1.3.4/services/SQOOP/metainfo.xml | 77 - .../services/SQOOP/package/scripts/__init__.py | 18 - .../services/SQOOP/package/scripts/params.py | 36 - .../SQOOP/package/scripts/service_check.py | 36 - .../services/SQOOP/package/scripts/sqoop.py | 51 - .../SQOOP/package/scripts/sqoop_client.py | 40 - .../SQOOP/package/templates/sqoop-env.sh.j2 | 36 - .../WEBHCAT/configuration/webhcat-site.xml | 126 -- .../HDP/1.3.4/services/WEBHCAT/metainfo.xml | 97 
-- .../WEBHCAT/package/files/templetonSmoke.sh | 95 -- .../WEBHCAT/package/scripts/__init__.py | 21 - .../services/WEBHCAT/package/scripts/params.py | 51 - .../WEBHCAT/package/scripts/service_check.py | 45 - .../WEBHCAT/package/scripts/status_params.py | 26 - .../services/WEBHCAT/package/scripts/webhcat.py | 120 -- .../WEBHCAT/package/scripts/webhcat_server.py | 54 - .../WEBHCAT/package/scripts/webhcat_service.py | 41 - .../WEBHCAT/package/templates/webhcat-env.sh.j2 | 44 - .../services/ZOOKEEPER/configuration/global.xml | 75 - .../HDP/1.3.4/services/ZOOKEEPER/metainfo.xml | 72 - .../services/ZOOKEEPER/package/files/zkEnv.sh | 96 -- .../ZOOKEEPER/package/files/zkServer.sh | 120 -- .../ZOOKEEPER/package/files/zkService.sh | 26 - .../services/ZOOKEEPER/package/files/zkSmoke.sh | 78 - .../ZOOKEEPER/package/scripts/__init__.py | 21 - .../ZOOKEEPER/package/scripts/params.py | 71 - .../ZOOKEEPER/package/scripts/service_check.py | 47 - .../ZOOKEEPER/package/scripts/status_params.py | 26 - .../ZOOKEEPER/package/scripts/zookeeper.py | 92 -- .../package/scripts/zookeeper_client.py | 43 - .../package/scripts/zookeeper_server.py | 55 - .../package/scripts/zookeeper_service.py | 43 - .../package/templates/configuration.xsl.j2 | 37 - .../package/templates/log4j.properties.j2 | 71 - .../ZOOKEEPER/package/templates/zoo.cfg.j2 | 51 - .../package/templates/zookeeper-env.sh.j2 | 25 - .../templates/zookeeper_client_jaas.conf.j2 | 22 - .../package/templates/zookeeper_jaas.conf.j2 | 25 - 260 files changed, 1403 insertions(+), 21441 deletions(-) ---------------------------------------------------------------------- http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/FLUME/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/FLUME/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/FLUME/metainfo.xml index 13eba83..bebb54e 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/FLUME/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/FLUME/metainfo.xml @@ -24,6 +24,7 @@ FLUME_SERVER MASTER + 1 http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/metainfo.xml index 1a895b8..09d78a6 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/GANGLIA/metainfo.xml @@ -16,29 +16,91 @@ limitations under the License. 
--> - root - Ganglia Metrics Collection system - 3.5.0 - - - - GANGLIA_SERVER - MASTER - - + 2.0 + + + GANGLIA + Ganglia Metrics Collection system + 3.5.0 + - GANGLIA_MONITOR - SLAVE + GANGLIA_SERVER + MASTER + 1 + + + PYTHON + 600 + - MONITOR_WEBSERVER - MASTER + GANGLIA_MONITOR + SLAVE + ALL + + true + + + + PYTHON + 600 + - - - - global - - + + + + any + + + rpm + libganglia-3.5.0-99 + + + rpm + ganglia-devel-3.5.0-99 + + + rpm + ganglia-gmetad-3.5.0-99 + + + rpm + ganglia-web-3.5.7-99.noarch + + + rpm + python-rrdtool.x86_64 + + + rpm + ganglia-gmond-3.5.0-99 + + + rpm + ganglia-gmond-modules-python-3.5.0-99 + + + + + suse + + rpm + apache2 + + + rpm + apache2-mod_php5 + + + + centos6 + + rpm + httpd + + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/metainfo.xml index 6643782..4c610db 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HBASE/metainfo.xml @@ -16,29 +16,108 @@ limitations under the License. --> - mapred - Non-relational distributed database and centralized service for configuration management & synchronization - 0.94.6.1.3.3.0 - - + 2.0 + + + HBASE + Non-relational distributed database and centralized service for configuration management & + synchronization + + 0.94.6.1.3.3.0 + - HBASE_MASTER - MASTER + HBASE_MASTER + MASTER + 1 + + + HDFS/HDFS_CLIENT + host + + true + + + + ZOOKEEPER/ZOOKEEPER_SERVER + cluster + + true + HBASE/HBASE_MASTER + + + + + + PYTHON + 600 + + + + DECOMMISSION + + + PYTHON + 600 + + + - HBASE_REGIONSERVER - SLAVE + HBASE_REGIONSERVER + SLAVE + 1+ + + + PYTHON + + + + DECOMMISSION + + + PYTHON + 600 + + + - HBASE_CLIENT - CLIENT + HBASE_CLIENT + CLIENT + 0+ + + + PYTHON + - - - global - hbase-site - hbase-policy - + + + + + centos6 + + + rpm + hbase + + + + + + + + PYTHON + 300 + + + + global + hbase-policy + hbase-site + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/configuration/global.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/configuration/global.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/configuration/global.xml deleted file mode 100644 index b0c7eb6..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/configuration/global.xml +++ /dev/null @@ -1,45 +0,0 @@ - - - - - - - hcat_log_dir - /var/log/webhcat - WebHCat Log Dir. - - - hcat_pid_dir - /var/run/webhcat - WebHCat Pid Dir. - - - hcat_user - hcat - HCat User. - - - webhcat_user - hcat - WebHCat User. 
- - - http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/metainfo.xml deleted file mode 100644 index 8e78530..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HCATALOG/metainfo.xml +++ /dev/null @@ -1,30 +0,0 @@ - - - - root - This is comment for HCATALOG service - 0.11.0.1.3.3.0 - - - - HCAT - CLIENT - - - - http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/configuration/hdfs-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/configuration/hdfs-site.xml index ac76122..1fc6c59 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/configuration/hdfs-site.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/configuration/hdfs-site.xml @@ -22,7 +22,7 @@ - + dfs.name.dir @@ -49,7 +49,7 @@ true - + dfs.datanode.socket.write.timeout 0 DFS Client write socket timeout @@ -66,7 +66,7 @@ dfs.block.local-path-access.user hbase the user who is allowed to perform short - circuit reads. + circuit reads. true @@ -75,11 +75,11 @@ dfs.data.dir /hadoop/hdfs/data Determines where on the local filesystem an DFS data node - should store its blocks. If this is a comma-delimited - list of directories, then data will be stored in all named - directories, typically on different devices. - Directories that do not exist are ignored. - + should store its blocks. If this is a comma-delimited + list of directories, then data will be stored in all named + directories, typically on different devices. + Directories that do not exist are ignored. + true @@ -87,32 +87,32 @@ dfs.hosts.exclude /etc/hadoop/conf/dfs.exclude Names a file that contains a list of hosts that are - not permitted to connect to the namenode. The full pathname of the - file must be specified. If the value is empty, no hosts are - excluded. + not permitted to connect to the namenode. The full pathname of the + file must be specified. If the value is empty, no hosts are + excluded. dfs.hosts /etc/hadoop/conf/dfs.include Names a file that contains a list of hosts that are - permitted to connect to the namenode. The full pathname of the file - must be specified. If the value is empty, all hosts are - permitted. + permitted to connect to the namenode. The full pathname of the file + must be specified. If the value is empty, all hosts are + permitted. dfs.replication.max 50 Maximal block replication. - + dfs.replication 3 Default block replication. - + @@ -125,21 +125,21 @@ dfs.safemode.threshold.pct 1.0f - Specifies the percentage of blocks that should satisfy - the minimal replication requirement defined by dfs.replication.min. - Values less than or equal to 0 mean not to start in safe mode. - Values greater than 1 will make safe mode permanent. - + Specifies the percentage of blocks that should satisfy + the minimal replication requirement defined by dfs.replication.min. + Values less than or equal to 0 mean not to start in safe mode. + Values greater than 1 will make safe mode permanent. 
+ dfs.balance.bandwidthPerSec 6250000 - Specifies the maximum amount of bandwidth that each datanode - can utilize for the balancing purpose in term of - the number of bytes per second. - + Specifies the maximum amount of bandwidth that each datanode + can utilize for the balancing purpose in term of + the number of bytes per second. + @@ -191,133 +191,133 @@ dfs.http.address localhost:50070 -The name of the default file system. Either the -literal string "local" or a host:port for NDFS. -true - - - -dfs.datanode.du.reserved - -1073741824 -Reserved space in bytes per volume. Always leave this much space free for non dfs use. - - - - -dfs.datanode.ipc.address -0.0.0.0:8010 - -The datanode ipc server address and port. -If the port is 0 then the server will start on a free port. - - - - -dfs.blockreport.initialDelay -120 -Delay for first block report in seconds. - - - -dfs.datanode.du.pct -0.85f -When calculating remaining space, only use this percentage of the real available space - - - - -dfs.namenode.handler.count -40 -The number of server threads for the namenode. - - - -dfs.datanode.max.xcievers -4096 -PRIVATE CONFIG VARIABLE - - - - - -dfs.umaskmode -077 - -The octal umask used when creating files and directories. - - - - -dfs.web.ugi - -gopher,gopher -The user account used by the web interface. -Syntax: USERNAME,GROUP1,GROUP2, ... - - - - -dfs.permissions -true - -If "true", enable permission checking in HDFS. -If "false", permission checking is turned off, -but all other behavior is unchanged. -Switching from one parameter value to the other does not change the mode, -owner or group of files or directories. - - - - -dfs.permissions.supergroup -hdfs -The name of the group of super-users. - - - -dfs.namenode.handler.count -100 -Added to grow Queue size so that more client connections are allowed - - - -ipc.server.max.response.size -5242880 - - -dfs.block.access.token.enable -true - -If "true", access tokens are used as capabilities for accessing datanodes. -If "false", no access tokens are checked on accessing datanodes. - - - - -dfs.namenode.kerberos.principal - - -Kerberos principal name for the NameNode - - - - -dfs.secondary.namenode.kerberos.principal - + The name of the default file system. Either the + literal string "local" or a host:port for NDFS. + true + + + + dfs.datanode.du.reserved + + 1073741824 + Reserved space in bytes per volume. Always leave this much space free for non dfs use. + + + + + dfs.datanode.ipc.address + 0.0.0.0:8010 - Kerberos principal name for the secondary NameNode. + The datanode ipc server address and port. + If the port is 0 then the server will start on a free port. + + dfs.blockreport.initialDelay + 120 + Delay for first block report in seconds. + + + + dfs.datanode.du.pct + 0.85f + When calculating remaining space, only use this percentage of the real available space + + + + + dfs.namenode.handler.count + 40 + The number of server threads for the namenode. + + + + dfs.datanode.max.xcievers + 4096 + PRIVATE CONFIG VARIABLE + + + + + + dfs.umaskmode + 077 + + The octal umask used when creating files and directories. + + + + + dfs.web.ugi + + gopher,gopher + The user account used by the web interface. + Syntax: USERNAME,GROUP1,GROUP2, ... + + + + + dfs.permissions + true + + If "true", enable permission checking in HDFS. + If "false", permission checking is turned off, + but all other behavior is unchanged. + Switching from one parameter value to the other does not change the mode, + owner or group of files or directories. 
+ + + + + dfs.permissions.supergroup + hdfs + The name of the group of super-users. + + + + dfs.namenode.handler.count + 100 + Added to grow Queue size so that more client connections are allowed + + + + ipc.server.max.response.size + 5242880 + + + dfs.block.access.token.enable + true + + If "true", access tokens are used as capabilities for accessing datanodes. + If "false", no access tokens are checked on accessing datanodes. + + + + + dfs.namenode.kerberos.principal + + + Kerberos principal name for the NameNode + + - + + dfs.secondary.namenode.kerberos.principal + + + Kerberos principal name for the secondary NameNode. + + + + + dfs.namenode.kerberos.https.principal - The Kerberos principal for the host that the NameNode runs on. + The Kerberos principal for the host that the NameNode runs on. @@ -363,84 +363,84 @@ Kerberos principal name for the NameNode dfs.datanode.kerberos.principal - - The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name. + + The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name. dfs.namenode.keytab.file - - Combined keytab file containing the namenode service and host principals. + + Combined keytab file containing the namenode service and host principals. dfs.secondary.namenode.keytab.file - - Combined keytab file containing the namenode service and host principals. + + Combined keytab file containing the namenode service and host principals. dfs.datanode.keytab.file - - The filename of the keytab file for the DataNode. + + The filename of the keytab file for the DataNode. dfs.https.port 50470 - The https port where namenode binds + The https port where namenode binds dfs.https.address localhost:50470 - The https address where namenode binds + The https address where namenode binds dfs.datanode.data.dir.perm 750 -The permissions that should be there on dfs.data.dir -directories. The datanode will not come up if the permissions are -different on existing dfs.data.dir directories. If the directories -don't exist, they will be created with this permission. - - - - dfs.access.time.precision - 0 - The access time for HDFS file is precise upto this value. - The default value is 1 hour. Setting a value of 0 disables - access times for HDFS. - - - - - dfs.cluster.administrators - hdfs - ACL for who all can view the default servlets in the HDFS - - - - ipc.server.read.threadpool.size - 5 - - - - - dfs.datanode.failed.volumes.tolerated - 0 - Number of failed disks datanode would tolerate - + The permissions that should be there on dfs.data.dir + directories. The datanode will not come up if the permissions are + different on existing dfs.data.dir directories. If the directories + don't exist, they will be created with this permission. + + + + dfs.access.time.precision + 0 + The access time for HDFS file is precise upto this value. + The default value is 1 hour. Setting a value of 0 disables + access times for HDFS. 
+ + + + + dfs.cluster.administrators + hdfs + ACL for who all can view the default servlets in the HDFS + + + + ipc.server.read.threadpool.size + 5 + + + + + dfs.datanode.failed.volumes.tolerated + 0 + Number of failed disks datanode would tolerate + dfs.namenode.avoid.read.stale.datanode http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/metainfo.xml index 0bbab3e..009acae 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HDFS/metainfo.xml @@ -16,35 +16,131 @@ limitations under the License. --> - root - Apache Hadoop Distributed File System - 1.2.0.1.3.3.0 + 2.0 + + + HDFS + Apache Hadoop Distributed File System + 1.2.0.1.3.3.0 - + - NAMENODE - MASTER + NAMENODE + MASTER + 1 + + + PYTHON + 600 + + + + DECOMMISSION + + + PYTHON + 600 + + + - DATANODE - SLAVE + DATANODE + SLAVE + 1+ + + + PYTHON + 600 + - SECONDARY_NAMENODE - MASTER + SECONDARY_NAMENODE + MASTER + 1 + + + PYTHON + 600 + - HDFS_CLIENT - CLIENT + HDFS_CLIENT + CLIENT + 0+ + + + PYTHON + 600 + - - - core-site - global - hdfs-site - hadoop-policy - + + + + any + + + rpm + lzo + + + rpm + hadoop + + + rpm + hadoop-libhdfs + + + rpm + hadoop-native + + + rpm + hadoop-pipes + + + rpm + hadoop-sbin + + + rpm + hadoop-lzo + + + rpm + hadoop-lzo-native + + + rpm + snappy + + + rpm + snappy-devel + + + rpm + ambari-log4j + + + + + + + PYTHON + 300 + + + + core-site + global + hdfs-site + hadoop-policy + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/global.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/global.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/global.xml index d9adc80..ae7f586 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/global.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/global.xml @@ -121,5 +121,28 @@ hive Hive User. + + + + + hcat_log_dir + /var/log/webhcat + WebHCat Log Dir. + + + hcat_pid_dir + /etc/run/webhcat + WebHCat Pid Dir. + + + hcat_user + hcat + HCat User. + + + webhcat_user + hcat + WebHCat User. + http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/hive-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/hive-site.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/hive-site.xml index 24de30b..29ed54e 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/hive-site.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/configuration/hive-site.xml @@ -58,21 +58,21 @@ limitations under the License. hive.metastore.sasl.enabled If true, the metastore thrift interface will be secured with SASL. - Clients must authenticate with Kerberos. + Clients must authenticate with Kerberos. 
hive.metastore.kerberos.keytab.file The path to the Kerberos Keytab file containing the metastore - thrift server's service principal. + thrift server's service principal. hive.metastore.kerberos.principal The service principal for the metastore thrift server. The special - string _HOST will be replaced automatically with the correct host name. + string _HOST will be replaced automatically with the correct host name. @@ -115,7 +115,7 @@ limitations under the License. hive.security.authorization.manager org.apache.hcatalog.security.HdfsAuthorizationProvider the hive client authorization manager class name. - The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. + The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/metainfo.xml index afeaae1..0a0f8fa 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HIVE/metainfo.xml @@ -16,30 +16,171 @@ limitations under the License. --> - root - Data warehouse system for ad-hoc queries & analysis of large datasets and table & storage management service - 0.11.0.1.3.3.0 + 2.0 + + + HIVE + Data warehouse system for ad-hoc queries & analysis of large datasets and table & storage management service + 0.11.0.1.3.3.0 + - - HIVE_METASTORE - MASTER + HIVE_METASTORE + MASTER + + 1 + + true + HIVE/HIVE_SERVER + + + + PYTHON + 600 + + + + HIVE_SERVER + MASTER + 1 + + + ZOOKEEPER/ZOOKEEPER_SERVER + cluster + + true + HIVE/HIVE_SERVER + + + + + + PYTHON + + + - HIVE_SERVER - MASTER + MYSQL_SERVER + MASTER + + 1 + + true + HIVE/HIVE_SERVER + + + + PYTHON + + - MYSQL_SERVER - MASTER + HIVE_CLIENT + CLIENT + 0+ + + + PYTHON + + + + + + any + + + rpm + hive + + + rpm + mysql-connector-java + + + rpm + mysql + + + + + centos6 + + + rpm + mysql-server + + + + + centos5 + + + rpm + mysql-server + + + + + suse + + + rpm + mysql-client + + + + + + + + PYTHON + 300 + + + + hive-site + global + + + + + HCATALOG + This is comment for HCATALOG service + 0.11.0.1.3.3.0 + - HIVE_CLIENT - CLIENT + HCAT + CLIENT + + + PYTHON + - - - global - hive-site - + + + + any + + + rpm + hcatalog + + + + + + + PYTHON + 300 + + + + global + + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HUE/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HUE/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HUE/metainfo.xml index ba580ca..0a6b59e 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HUE/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/HUE/metainfo.xml @@ -25,6 +25,7 @@ HUE_SERVER MASTER + 1 http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/configuration/mapred-site.xml 
---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/configuration/mapred-site.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/configuration/mapred-site.xml index c4f6e39..1db37a8 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/configuration/mapred-site.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/configuration/mapred-site.xml @@ -22,7 +22,7 @@ - + io.sort.mb @@ -50,25 +50,25 @@ No description - + - - mapred.tasktracker.tasks.sleeptime-before-sigkill - 250 - Normally, this is the amount of time before killing - processes, and the recommended-default is 5.000 seconds - a value of - 5000 here. In this case, we are using it solely to blast tasks before - killing them, and killing them very quickly (1/4 second) to guarantee - that we do not leave VMs around for later jobs. - - + + mapred.tasktracker.tasks.sleeptime-before-sigkill + 250 + Normally, this is the amount of time before killing + processes, and the recommended-default is 5.000 seconds - a value of + 5000 here. In this case, we are using it solely to blast tasks before + killing them, and killing them very quickly (1/4 second) to guarantee + that we do not leave VMs around for later jobs. + + mapred.job.tracker.handler.count 50 - The number of server threads for the JobTracker. This should be roughly - 4% of the number of tasktracker nodes. + The number of server threads for the JobTracker. This should be roughly + 4% of the number of tasktracker nodes. @@ -104,8 +104,8 @@ - mapreduce.cluster.administrators - hadoop + mapreduce.cluster.administrators + hadoop @@ -135,14 +135,14 @@ mapred.map.tasks.speculative.execution false If true, then multiple instances of some map tasks - may be executed in parallel. + may be executed in parallel. mapred.reduce.tasks.speculative.execution false If true, then multiple instances of some reduce tasks - may be executed in parallel. + may be executed in parallel. @@ -154,29 +154,29 @@ mapred.inmem.merge.threshold 1000 The threshold, in terms of the number of files - for the in-memory merge process. When we accumulate threshold number of files - we initiate the in-memory merge and spill to disk. A value of 0 or less than - 0 indicates we want to DON'T have any threshold and instead depend only on - the ramfs's memory consumption to trigger the merge. - + for the in-memory merge process. When we accumulate threshold number of files + we initiate the in-memory merge and spill to disk. A value of 0 or less than + 0 indicates we want to DON'T have any threshold and instead depend only on + the ramfs's memory consumption to trigger the merge. + mapred.job.shuffle.merge.percent 0.66 The usage threshold at which an in-memory merge will be - initiated, expressed as a percentage of the total memory allocated to - storing in-memory map outputs, as defined by - mapred.job.shuffle.input.buffer.percent. - + initiated, expressed as a percentage of the total memory allocated to + storing in-memory map outputs, as defined by + mapred.job.shuffle.input.buffer.percent. + mapred.job.shuffle.input.buffer.percent 0.7 The percentage of memory to be allocated from the maximum heap - size to storing map outputs during the shuffle. - + size to storing map outputs during the shuffle. 
+ @@ -187,13 +187,13 @@ - - mapred.output.compression.type - BLOCK - If the job outputs are to compressed as SequenceFiles, how should - they be compressed? Should be one of NONE, RECORD or BLOCK. - - + + mapred.output.compression.type + BLOCK + If the job outputs are to compressed as SequenceFiles, how should + they be compressed? Should be one of NONE, RECORD or BLOCK. + + @@ -210,7 +210,7 @@ mapred.jobtracker.restart.recover false "true" to enable (job) recovery upon restart, - "false" to start afresh + "false" to start afresh @@ -218,20 +218,20 @@ mapred.job.reduce.input.buffer.percent 0.0 The percentage of memory- relative to the maximum heap size- to - retain map outputs during the reduce. When the shuffle is concluded, any - remaining map outputs in memory must consume less than this threshold before - the reduce can begin. - + retain map outputs during the reduce. When the shuffle is concluded, any + remaining map outputs in memory must consume less than this threshold before + the reduce can begin. + - - mapreduce.reduce.input.limit - 10737418240 - The limit on the input size of the reduce. (This value - is 10 Gb.) If the estimated input size of the reduce is greater than - this value, job is failed. A value of -1 means that there is no limit - set. - + + mapreduce.reduce.input.limit + 10737418240 + The limit on the input size of the reduce. (This value + is 10 Gb.) If the estimated input size of the reduce is greater than + this value, job is failed. A value of -1 means that there is no limit + set. + @@ -245,9 +245,9 @@ mapred.task.timeout 600000 The number of milliseconds before a task will be - terminated if it neither reads an input, writes an output, nor - updates its status string. - + terminated if it neither reads an input, writes an output, nor + updates its status string. + @@ -259,9 +259,9 @@ mapred.task.tracker.task-controller org.apache.hadoop.mapred.DefaultTaskController - - TaskController which is used to launch and manage task execution. - + + TaskController which is used to launch and manage task execution. + @@ -279,7 +279,6 @@ mapred.child.java.opts -server -Xmx${ambari.mapred.child.java.opts.memory}m -Djava.net.preferIPv4Stack=true - Java options for the TaskTracker child processes @@ -295,7 +294,7 @@ mapred.cluster.reduce.memory.mb 2048 - The virtual memory size of a single Reduce slot in the MapReduce framework + The virtual memory size of a single Reduce slot in the MapReduce framework @@ -331,147 +330,147 @@ - - mapred.hosts - /etc/hadoop/conf/mapred.include - - Names a file that contains the list of nodes that may - connect to the jobtracker. If the value is empty, all hosts are - permitted. - - - - - mapred.hosts.exclude - /etc/hadoop/conf/mapred.exclude - - Names a file that contains the list of hosts that - should be excluded by the jobtracker. If the value is empty, no - hosts are excluded. - - - - - mapred.max.tracker.blacklists - 16 - - if node is reported blacklisted by 16 successful jobs within timeout-window, it will be graylisted - - - - - mapred.healthChecker.script.path - /etc/hadoop/conf/health_check - - Directory path to view job status - - - - - mapred.healthChecker.interval - 135000 - - - - mapred.healthChecker.script.timeout - 60000 - - - - mapred.job.tracker.persist.jobstatus.active - false - Indicates if persistency of job status information is - active or not. - - - - - mapred.job.tracker.persist.jobstatus.hours - 1 - The number of hours job status information is persisted in DFS. 
- The job status information will be available after it drops of the memory - queue and between jobtracker restarts. With a zero value the job status - information is not persisted at all in DFS. - - - - - mapred.job.tracker.persist.jobstatus.dir - /mapred/jobstatus - The directory where the job status information is persisted - in a file system to be available after it drops of the memory queue and - between jobtracker restarts. - - - - - mapred.jobtracker.retirejob.check - 10000 - - - - mapred.jobtracker.retirejob.interval - 21600000 - - - - mapred.job.tracker.history.completed.location - /mapred/history/done - No description - - - - mapred.task.maxvmem - - true - No description - - - - mapred.jobtracker.maxtasks.per.job - -1 - true - The maximum number of tasks for a single job. - A value of -1 indicates that there is no maximum. - - - - mapreduce.fileoutputcommitter.marksuccessfuljobs - false - - - - mapred.userlog.retain.hours - 24 - - The maximum time, in hours, for which the user-logs are to be retained after the job completion. - - - - - mapred.job.reuse.jvm.num.tasks - 1 - - How many tasks to run per jvm. If set to -1, there is no limit - - true - - - - mapreduce.jobtracker.kerberos.principal - - + + mapred.hosts + /etc/hadoop/conf/mapred.include + + Names a file that contains the list of nodes that may + connect to the jobtracker. If the value is empty, all hosts are + permitted. + + + + + mapred.hosts.exclude + /etc/hadoop/conf/mapred.exclude + + Names a file that contains the list of hosts that + should be excluded by the jobtracker. If the value is empty, no + hosts are excluded. + + + + + mapred.max.tracker.blacklists + 16 + + if node is reported blacklisted by 16 successful jobs within timeout-window, it will be graylisted + + + + + mapred.healthChecker.script.path + /etc/hadoop/conf/health_check + + Directory path to view job status + + + + + mapred.healthChecker.interval + 135000 + + + + mapred.healthChecker.script.timeout + 60000 + + + + mapred.job.tracker.persist.jobstatus.active + false + Indicates if persistency of job status information is + active or not. + + + + + mapred.job.tracker.persist.jobstatus.hours + 1 + The number of hours job status information is persisted in DFS. + The job status information will be available after it drops of the memory + queue and between jobtracker restarts. With a zero value the job status + information is not persisted at all in DFS. + + + + + mapred.job.tracker.persist.jobstatus.dir + /mapred/jobstatus + The directory where the job status information is persisted + in a file system to be available after it drops of the memory queue and + between jobtracker restarts. + + + + + mapred.jobtracker.retirejob.check + 10000 + + + + mapred.jobtracker.retirejob.interval + 21600000 + + + + mapred.job.tracker.history.completed.location + /mapred/history/done + No description + + + + mapred.task.maxvmem + + true + No description + + + + mapred.jobtracker.maxtasks.per.job + -1 + true + The maximum number of tasks for a single job. + A value of -1 indicates that there is no maximum. + + + + mapreduce.fileoutputcommitter.marksuccessfuljobs + false + + + + mapred.userlog.retain.hours + 24 + + The maximum time, in hours, for which the user-logs are to be retained after the job completion. + + + + + mapred.job.reuse.jvm.num.tasks + 1 + + How many tasks to run per jvm. If set to -1, there is no limit + + true + + + + mapreduce.jobtracker.kerberos.principal + + JT user name key. 
- - + + - - mapreduce.tasktracker.kerberos.principal - - - tt user name key. "_HOST" is replaced by the host name of the task tracker. - - + + mapreduce.tasktracker.kerberos.principal + + + tt user name key. "_HOST" is replaced by the host name of the task tracker. + + @@ -481,54 +480,54 @@ - - mapreduce.jobtracker.keytab.file - - - The keytab for the jobtracker principal. - + + mapreduce.jobtracker.keytab.file + + + The keytab for the jobtracker principal. + - + - - mapreduce.tasktracker.keytab.file - + + mapreduce.tasktracker.keytab.file + The filename of the keytab for the task tracker - + - - mapred.task.tracker.http.address - - Http address for task tracker. - + + mapred.task.tracker.http.address + + Http address for task tracker. + - - mapreduce.jobtracker.staging.root.dir - /user - The Path prefix for where the staging directories should be placed. The next level is always the user's - name. It is a path in the default file system. - + + mapreduce.jobtracker.staging.root.dir + /user + The Path prefix for where the staging directories should be placed. The next level is always the user's + name. It is a path in the default file system. + - - mapreduce.tasktracker.group - hadoop - The group that the task controller uses for accessing the task controller. The mapred user must be a member and users should *not* be members. + + mapreduce.tasktracker.group + hadoop + The group that the task controller uses for accessing the task controller. The mapred user must be a member and users should *not* be members. - + mapreduce.jobtracker.split.metainfo.maxsize 50000000 true - If the size of the split metainfo file is larger than this, the JobTracker will fail the job during - initialize. - + If the size of the split metainfo file is larger than this, the JobTracker will fail the job during + initialize. + mapreduce.history.server.embedded false Should job history server be embedded within Job tracker -process + process true @@ -543,61 +542,60 @@ process mapreduce.jobhistory.kerberos.principal - + Job history user name key. (must map to same user as JT -user) + user) - - mapreduce.jobhistory.keytab.file + + mapreduce.jobhistory.keytab.file - - The keytab for the job history server principal. - - - - mapred.jobtracker.blacklist.fault-timeout-window - 180 - - 3-hour sliding window (value is in minutes) - - - - - mapred.jobtracker.blacklist.fault-bucket-width - 15 - - 15-minute bucket size (value is in minutes) - - - - - mapred.queue.names - default - Comma separated list of queues configured for this jobtracker. - + + The keytab for the job history server principal. + + + mapred.jobtracker.blacklist.fault-timeout-window + 180 + + 3-hour sliding window (value is in minutes) + + + + + mapred.jobtracker.blacklist.fault-bucket-width + 15 + + 15-minute bucket size (value is in minutes) + + + + + mapred.queue.names + default + Comma separated list of queues configured for this jobtracker. + - - mapreduce.jobhistory.intermediate-done-dir - /mr-history/tmp - - Directory where history files are written by MapReduce jobs. - - - - - mapreduce.jobhistory.done-dir - /mr-history/done - - Directory where history files are managed by the MR JobHistory Server. - - - - - mapreduce.jobhistory.webapp.address - localhost:19888 - Enter your JobHistoryServer hostname. - + + mapreduce.jobhistory.intermediate-done-dir + /mr-history/tmp + + Directory where history files are written by MapReduce jobs. 
+ + + + + mapreduce.jobhistory.done-dir + /mr-history/done + + Directory where history files are managed by the MR JobHistory Server. + + + + + mapreduce.jobhistory.webapp.address + localhost:19888 + Enter your JobHistoryServer hostname. + http://git-wip-us.apache.org/repos/asf/ambari/blob/92583535/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/metainfo.xml index 2493a13..71783d7 100644 --- a/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/metainfo.xml +++ b/ambari-server/src/main/resources/stacks/HDP/1.3.3/services/MAPREDUCE/metainfo.xml @@ -15,30 +15,88 @@ See the License for the specific language governing permissions and limitations under the License. --> - - mapred - Apache Hadoop Distributed Processing Framework - 1.2.0.1.3.3.0 - + + 2.0 + + + MAPREDUCE + Apache Hadoop Distributed Processing Framework + 1.2.0.1.3.3.0 + - JOBTRACKER - MASTER + JOBTRACKER + MASTER + 1 + + + PYTHON + 600 + + + + DECOMMISSION + + + PYTHON + 600 + + + - TASKTRACKER - SLAVE + TASKTRACKER + SLAVE + 1+ + + + PYTHON + 600 + - MAPREDUCE_CLIENT - CLIENT + MAPREDUCE_CLIENT + CLIENT + 0+ + + + PYTHON + 600 + + + + + HISTORYSERVER + MASTER + 1 + + true + MAPREDUCE/JOBTRACKER + + + + PYTHON + 600 + - - - core-site - global - mapred-site - + + + + + PYTHON + 300 + + + + capacity-scheduler + core-site + global + mapred-site + mapred-queue-acls + + + +
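
For orientation, a minimal sketch of the schemaVersion 2.0 metainfo.xml layout that this commit moves the HDP 1.3.3 services onto (the Python-scripted format previously used by the 1.3.4 stack), using HDFS as the example. The element names and the PYTHON/timeout, cardinality, rpm package and config-type values follow fragments visible in the HDFS metainfo diff above; the script paths are illustrative assumptions rather than values copied from the commit, and SECONDARY_NAMENODE and HDFS_CLIENT follow the same component pattern.

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>HDFS</name>
      <comment>Apache Hadoop Distributed File System</comment>
      <version>1.2.0.1.3.3.0</version>
      <components>
        <component>
          <name>NAMENODE</name>
          <category>MASTER</category>
          <cardinality>1</cardinality>
          <commandScript>
            <!-- assumed path; scriptType and timeout appear in the diff -->
            <script>scripts/namenode.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>600</timeout>
          </commandScript>
          <customCommands>
            <customCommand>
              <name>DECOMMISSION</name>
              <commandScript>
                <script>scripts/namenode.py</script>
                <scriptType>PYTHON</scriptType>
                <timeout>600</timeout>
              </commandScript>
            </customCommand>
          </customCommands>
        </component>
        <component>
          <name>DATANODE</name>
          <category>SLAVE</category>
          <cardinality>1+</cardinality>
          <commandScript>
            <script>scripts/datanode.py</script>
            <scriptType>PYTHON</scriptType>
            <timeout>600</timeout>
          </commandScript>
        </component>
      </components>
      <osSpecifics>
        <osSpecific>
          <osType>any</osType>
          <packages>
            <package>
              <type>rpm</type>
              <name>hadoop</name>
            </package>
            <!-- further rpm entries (lzo, hadoop-libhdfs, hadoop-native, snappy, ambari-log4j, ...) as listed in the diff -->
          </packages>
        </osSpecific>
      </osSpecifics>
      <commandScript>
        <!-- service-level smoke/service check; assumed path -->
        <script>scripts/service_check.py</script>
        <scriptType>PYTHON</scriptType>
        <timeout>300</timeout>
      </commandScript>
      <configuration-dependencies>
        <config-type>core-site</config-type>
        <config-type>global</config-type>
        <config-type>hdfs-site</config-type>
        <config-type>hadoop-policy</config-type>
      </configuration-dependencies>
    </service>
  </services>
</metainfo>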
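
On the same hedged basis: the standalone HCATALOG definition deleted above (its 1.3.3 metainfo.xml and global.xml) appears to be folded into the HIVE metainfo.xml as a second <service> entry, with the hcat/webhcat user and directory properties moving into HIVE's global.xml. Roughly, with the hcat_client.py path again an assumption:

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>HIVE</name>
      <version>0.11.0.1.3.3.0</version>
      <!-- HIVE_METASTORE, HIVE_SERVER, MYSQL_SERVER and HIVE_CLIENT components as in the diff -->
    </service>
    <service>
      <name>HCATALOG</name>
      <comment>This is comment for HCATALOG service</comment>
      <version>0.11.0.1.3.3.0</version>
      <components>
        <component>
          <name>HCAT</name>
          <category>CLIENT</category>
          <commandScript>
            <script>scripts/hcat_client.py</script>
            <scriptType>PYTHON</scriptType>
          </commandScript>
        </component>
      </components>
      <osSpecifics>
        <osSpecific>
          <osType>any</osType>
          <packages>
            <package>
              <type>rpm</type>
              <name>hcatalog</name>
            </package>
          </packages>
        </osSpecific>
      </osSpecifics>
      <configuration-dependencies>
        <config-type>global</config-type>
      </configuration-dependencies>
    </service>
  </services>
</metainfo>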