Subject: svn commit: r1463225 [1/3] - in /incubator/ambari/trunk: ./ ambari-server/src/main/resources/stacks/HDP/1.2.1/ ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/ ambari-server/src/main/resources/stacks/HDP/1.2.1/services/ ambari-server/src/main/r...
Date: Mon, 01 Apr 2013 17:50:44 -0000
To: ambari-commits@incubator.apache.org
Reply-To: ambari-dev@incubator.apache.org
From: smohanty@apache.org
Message-Id: <20130401175047.531A323889BB@eris.apache.org>

Author: smohanty
Date: Mon Apr  1 17:50:42 2013
New Revision: 1463225

URL: http://svn.apache.org/r1463225
Log:
AMBARI-1757. Add support for Stack 1.2.2 to Ambari.
(smohanty)

Added:
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/repoinfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/GANGLIA/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/GANGLIA/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-policy.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HCATALOG/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HCATALOG/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/core-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hadoop-policy.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hdfs-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/configuration/hive-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/capacity-scheduler.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/core-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/mapred-queue-acls.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/mapred-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/NAGIOS/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/NAGIOS/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/OOZIE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/OOZIE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/OOZIE/configuration/oozie-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/OOZIE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/PIG/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/PIG/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/PIG/configuration/pig.properties
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/PIG/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/SQOOP/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/SQOOP/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/WEBHCAT/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/WEBHCAT/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/WEBHCAT/configuration/webhcat-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/WEBHCAT/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/ZOOKEEPER/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/ZOOKEEPER/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/repos/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/repos/repoinfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/GANGLIA/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/GANGLIA/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HBASE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HBASE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HBASE/configuration/hbase-policy.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HBASE/configuration/hbase-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HBASE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HCATALOG/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HCATALOG/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HDFS/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HDFS/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HDFS/configuration/core-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HDFS/configuration/hadoop-policy.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HDFS/configuration/hdfs-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HDFS/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HIVE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HIVE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HIVE/configuration/hive-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/HIVE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/configuration/capacity-scheduler.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/configuration/core-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/configuration/mapred-queue-acls.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/configuration/mapred-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/MAPREDUCE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/NAGIOS/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/NAGIOS/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/OOZIE/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/OOZIE/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/OOZIE/configuration/oozie-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/OOZIE/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/PIG/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/PIG/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/PIG/configuration/pig.properties
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/PIG/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/SQOOP/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/SQOOP/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/WEBHCAT/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/WEBHCAT/configuration/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/WEBHCAT/configuration/webhcat-site.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/WEBHCAT/metainfo.xml
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/ZOOKEEPER/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.1/services/ZOOKEEPER/metainfo.xml
Removed:
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.2/
    incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDPLocal/1.2.2/
Modified:
    incubator/ambari/trunk/CHANGES.txt
    incubator/ambari/trunk/ambari-web/app/config.js

Modified: incubator/ambari/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/CHANGES.txt?rev=1463225&r1=1463224&r2=1463225&view=diff
==============================================================================
--- incubator/ambari/trunk/CHANGES.txt (original)
+++ incubator/ambari/trunk/CHANGES.txt Mon Apr  1 17:50:42 2013
@@ -541,7 +541,9 @@ Trunk (unreleased changes):

 BUG FIXES

-  AMBARI-1749. set default heap size for zookeeper. (swagle)
+  AMBARI-1757. Add support for Stack 1.2.2 to Ambari. (smohanty)
+
+  AMBARI-1749. Set default heap size for zookeeper. (swagle)

   AMBARI-1748. JDK option on the UI when used is not passed onto the global parameters. (srimanth)

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/repoinfo.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/repoinfo.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/repoinfo.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/repos/repoinfo.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,123 @@
+<reposinfo>
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><repoid>HDP-epel</repoid><reponame>HDP-epel</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/centos5/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><repoid>HDP-epel</repoid><reponame>HDP-epel</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><repoid>HDP-epel</repoid><reponame>HDP-epel</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/centos5/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><repoid>HDP-epel</repoid><reponame>HDP-epel</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><repoid>HDP-epel</repoid><reponame>HDP-epel</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/centos5/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><repoid>HDP-epel</repoid><reponame>HDP-epel</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.15/repos/suse11</baseurl><repoid>HDP-UTILS-1.1.0.15</repoid><reponame>HDP-UTILS</reponame></repo>
+
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.2.1</baseurl><repoid>HDP-1.2.1</repoid><reponame>HDP</reponame></repo>
+  <repo><baseurl>http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.15/repos/suse11</baseurl><repoid>HDP-UTILS-1.1.0.15</repoid><reponame>HDP-UTILS</reponame></repo>
+</reposinfo>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/GANGLIA/metainfo.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/GANGLIA/metainfo.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/GANGLIA/metainfo.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/GANGLIA/metainfo.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,40 @@
+<metainfo>
+  <user>root</user>
+  <comment>Ganglia Metrics Collection system</comment>
+  <version>3.2.0</version>
+  <components>
+    <component><name>GANGLIA_SERVER</name><category>MASTER</category></component>
+    <component><name>GANGLIA_MONITOR</name><category>SLAVE</category></component>
+    <component><name>MONITOR_WEBSERVER</name><category>MASTER</category></component>
+  </components>
+</metainfo>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-policy.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-policy.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-policy.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-policy.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,53 @@
+<configuration>
+  <property><name>security.client.protocol.acl</name><value>*</value>
+    <description>ACL for HRegionInterface protocol implementations (i.e. clients talking to HRegionServers). The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.admin.protocol.acl</name><value>*</value>
+    <description>ACL for HMasterInterface protocol implementation (i.e. clients talking to HMaster for admin operations). The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.masterregion.protocol.acl</name><value>*</value>
+    <description>ACL for HMasterRegionInterface protocol implementations (for HRegionServers communicating with HMaster). The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+</configuration>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-site.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-site.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-site.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/configuration/hbase-site.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,355 @@
+<configuration>
+  <property><name>hbase.rootdir</name><value></value>
+    <description>The directory shared by region servers and into which HBase persists. The URL should be 'fully-qualified' to include the filesystem scheme. For example, to specify the HDFS directory '/hbase' where the HDFS instance's namenode is running at namenode.example.org on port 9000, set this value to: hdfs://namenode.example.org:9000/hbase. By default HBase writes into /tmp. Change this configuration, else all data will be lost on machine restart.</description></property>
+  <property><name>hbase.cluster.distributed</name><value>true</value>
+    <description>The mode the cluster will be in. Possible values are false for standalone mode and true for distributed mode. If false, startup will run all HBase and ZooKeeper daemons together in the one JVM.</description></property>
+  <property><name>hbase.tmp.dir</name><value></value>
+    <description>Temporary directory on the local filesystem. Change this setting to point to a location more permanent than '/tmp' (the '/tmp' directory is often cleared on machine restart).</description></property>
+  <property><name>hbase.master.info.bindAddress</name><value></value>
+    <description>The bind address for the HBase Master web UI</description></property>
+  <property><name>hbase.master.info.port</name><value></value>
+    <description>The port for the HBase Master web UI.</description></property>
+  <property><name>hbase.regionserver.info.port</name><value></value>
+    <description>The port for the HBase RegionServer web UI.</description></property>
+  <property><name>hbase.regionserver.global.memstore.upperLimit</name><value></value>
+    <description>Maximum size of all memstores in a region server before new updates are blocked and flushes are forced. Defaults to 40% of heap</description></property>
+  <property><name>hbase.regionserver.handler.count</name><value></value>
+    <description>Count of RPC Listener instances spun up on RegionServers. The same property is used by the Master for the count of master handlers. Default is 10.</description></property>
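
To make the ACL format used in hbase-policy.xml above concrete, an entry that grants a protocol to two named users plus two groups could look like the following sketch; the user and group names are hypothetical and are not part of this commit.

  <property>
    <name>security.client.protocol.acl</name>
    <value>alice,bob users,wheel</value>
  </property>

Here "alice,bob" is the user list and "users,wheel" is the group list, separated by the single blank the descriptions call out.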
+  <property><name>hbase.hregion.majorcompaction</name><value></value>
+    <description>The time (in milliseconds) between 'major' compactions of all HStoreFiles in a region. Default: 1 day. Set to 0 to disable automated major compactions.</description></property>
+  <property><name>hbase.master.lease.thread.wakefrequency</name><value>3000</value>
+    <description>The interval between checks for expired region server leases. This value has been reduced due to the other reduced values above so that the master will notice a dead region server sooner. The default is 15 seconds.</description></property>
+  <property><name>hbase.regionserver.global.memstore.lowerLimit</name><value></value>
+    <description>When memstores are being forced to flush to make room in memory, keep flushing until we hit this mark. Defaults to 35% of heap. Setting this value equal to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked due to memstore limiting.</description></property>
+  <property><name>hbase.hregion.memstore.block.multiplier</name><value></value>
+    <description>Block updates if the memstore has hbase.hregion.memstore.block.multiplier times hbase.hregion.flush.size bytes. Useful for preventing runaway memstore during spikes in update traffic. Without an upper-bound, the memstore fills such that when it flushes, the resultant flush files take a long time to compact or split, or worse, we OOME.</description></property>
+  <property><name>hbase.hregion.memstore.flush.size</name><value></value>
+    <description>Memstore will be flushed to disk if the size of the memstore exceeds this number of bytes. The value is checked by a thread that runs every hbase.server.thread.wakefrequency.</description></property>
+  <property><name>hbase.hregion.memstore.mslab.enabled</name><value></value>
+    <description>Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC pauses on large heaps.</description></property>
+  <property><name>hbase.hregion.max.filesize</name><value></value>
+    <description>Maximum HStoreFile size. If any one of a column family's HStoreFiles has grown to exceed this value, the hosting HRegion is split in two. Default: 1G.</description></property>
+  <property><name>hbase.client.scanner.caching</name><value></value>
+    <description>Number of rows that will be fetched when calling next on a scanner if it is not served from (local, client) memory. Higher caching values will enable faster scanners but will eat up more memory, and some calls of next may take longer and longer times when the cache is empty. Do not set this value such that the time between invocations is greater than the scanner timeout, i.e. hbase.regionserver.lease.period.</description></property>
+  <property><name>zookeeper.session.timeout</name><value></value>
+    <description>ZooKeeper session timeout. HBase passes this to the zk quorum as the suggested maximum time for a session (this setting becomes zookeeper's 'maxSessionTimeout'). See http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions "The client sends a requested timeout, the server responds with the timeout that it can give the client." In milliseconds.</description></property>
+  <property><name>hbase.client.keyvalue.maxsize</name><value></value>
+    <description>Specifies the combined maximum allowed size of a KeyValue instance. This is to set an upper boundary for a single entry saved in a storage file. Since they cannot be split, it helps avoiding that a region cannot be split any further because the data is too large. It seems wise to set this to a fraction of the maximum region size. Setting it to zero or less disables the check.</description></property>
+  <property><name>hbase.hstore.compactionThreshold</name><value></value>
+    <description>If there are more than this number of HStoreFiles in any one HStore (one HStoreFile is written per flush of memstore), then a compaction is run to rewrite all HStoreFiles as one. Larger numbers put off compaction, but when it runs, it takes longer to complete.</description></property>
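
The memstore settings above act as a pair of server-wide watermarks plus a per-region cap. A hedged sketch of how they combine; 0.4 and 0.35 mirror the "40%/35% of heap" defaults named in the descriptions, while the other two values are purely illustrative (the stack file itself leaves all four blank):

  <!-- Illustrative values only; not the values shipped in this stack definition. -->
  <property><name>hbase.regionserver.global.memstore.upperLimit</name><value>0.4</value></property>
  <property><name>hbase.regionserver.global.memstore.lowerLimit</name><value>0.35</value></property>
  <property><name>hbase.hregion.memstore.flush.size</name><value>134217728</value></property>
  <property><name>hbase.hregion.memstore.block.multiplier</name><value>2</value></property>

With a 128 MB flush size and a multiplier of 2, writes to a single region block once its memstore reaches 256 MB; server-wide, flushes are forced at 40% of heap and continue until usage drops back to 35%.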
+  <property><name>hbase.hstore.blockingStoreFiles</name><value></value>
+    <description>If there are more than this number of StoreFiles in any one Store (one StoreFile is written per flush of MemStore), then updates are blocked for this HRegion until a compaction is completed, or until hbase.hstore.blockingWaitTime has been exceeded.</description></property>
+  <property><name>hfile.block.cache.size</name><value></value>
+    <description>Percentage of maximum heap (-Xmx setting) to allocate to the block cache used by HFile/StoreFile. The default of 0.25 means allocate 25%. Set to 0 to disable, but it's not recommended.</description></property>
+  <property><name>hbase.master.keytab.file</name><value></value>
+    <description>Full path to the kerberos keytab file to use for logging in the configured HMaster server principal.</description></property>
+  <property><name>hbase.master.kerberos.principal</name><value></value>
+    <description>Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HMaster process. The principal name should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance.</description></property>
+  <property><name>hbase.regionserver.keytab.file</name><value></value>
+    <description>Full path to the kerberos keytab file to use for logging in the configured HRegionServer server principal.</description></property>
+  <property><name>hbase.regionserver.kerberos.principal</name><value></value>
+    <description>Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HRegionServer process. The principal name should be in the form: user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance. An entry for this principal must exist in the file specified in hbase.regionserver.keytab.file.</description></property>
+  <property><name>hbase.superuser</name><value>hbase</value>
+    <description>List of users or groups (comma-separated) who are allowed full privileges, regardless of stored ACLs, across the cluster. Only used when HBase security is enabled.</description></property>
+  <property><name>hbase.coprocessor.region.classes</name><value></value>
+    <description>A comma-separated list of Coprocessors that are loaded by default on all tables. For any override coprocessor method, these classes will be called in order. After implementing your own Coprocessor, just put it in HBase's classpath and add the fully qualified class name here. A coprocessor can also be loaded on demand by setting HTableDescriptor.</description></property>
+  <property><name>hbase.coprocessor.master.classes</name><value></value>
+    <description>A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented coprocessor methods, the listed classes will be called in order. After implementing your own MasterObserver, just put it in HBase's classpath and add the fully qualified class name here.</description></property>
+  <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value>
+    <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description></property>
+  <property><name>hbase.zookeeper.quorum</name><value></value>
+    <description>Comma separated list of servers in the ZooKeeper Quorum. For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes of operation. For a fully-distributed setup, this should be set to a full list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh, this is the list of servers which we will start/stop ZooKeeper on.</description></property>
+  <property><name>dfs.support.append</name><value></value>
+    <description>Does HDFS allow appends to files? This is an HDFS config, set here so the HDFS client will do append support. You must ensure that this config is true server-side too when running HBase (you will have to restart your cluster after setting it).</description></property>
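
Because the dfs.support.append description above warns that the flag must also be true server-side, the HBase entry is meant to be paired with a matching entry in hdfs-site.xml (this commit's hdfs-site.xml below does carry the same property name). A hedged sketch of that server-side pairing; the explicit true value here is illustrative, since the stack file leaves the value to be filled in:

  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>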
+  <property><name>dfs.client.read.shortcircuit</name><value></value>
+    <description>Enable/Disable short circuit read for your client. Hadoop servers should be configured to allow short circuit read for the hbase user for this to take effect.</description></property>
+  <property><name>dfs.client.read.shortcircuit.skip.checksum</name><value></value>
+    <description>Enable/disable skipping the checksum check</description></property>
+  <property><name>hbase.regionserver.optionalcacheflushinterval</name><value>10000</value>
+    <description>Amount of time to wait since the last time a region was flushed before invoking an optional cache flush. Default 60,000.</description></property>
+  <property><name>hbase.zookeeper.useMulti</name><value>true</value>
+    <description>Instructs HBase to make use of ZooKeeper's multi-update functionality. This allows certain ZooKeeper operations to complete more quickly and prevents some issues with rare Replication failure scenarios (see the release note of HBASE-2611 for an example). IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+ and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).</description></property>
+</configuration>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/metainfo.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/metainfo.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/metainfo.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HBASE/metainfo.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,40 @@
+<metainfo>
+  <user>mapred</user>
+  <comment>Non-relational distributed database and centralized service for configuration management &amp; synchronization</comment>
+  <version>0.94.5</version>
+  <components>
+    <component><name>HBASE_MASTER</name><category>MASTER</category></component>
+    <component><name>HBASE_REGIONSERVER</name><category>SLAVE</category></component>
+    <component><name>HBASE_CLIENT</name><category>CLIENT</category></component>
+  </components>
+</metainfo>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HCATALOG/metainfo.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HCATALOG/metainfo.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HCATALOG/metainfo.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HCATALOG/metainfo.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,30 @@
+<metainfo>
+  <user>root</user>
+  <comment>This is comment for HCATALOG service</comment>
+  <version>0.5.0</version>
+  <components>
+    <component><name>HCAT</name><category>CLIENT</category></component>
+  </components>
+</metainfo>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/core-site.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/core-site.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/core-site.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/core-site.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,251 @@
+<configuration>
+  <property><name>io.file.buffer.size</name><value>131072</value>
+    <description>The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.</description></property>
+  <property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
+  <property><name>io.compression.codecs</name><value></value>
+    <description>A list of the compression codec classes that can be used for compression/decompression.</description></property>
+  <property><name>io.compression.codec.lzo.class</name><value>com.hadoop.compression.lzo.LzoCodec</value>
+    <description>The implementation for lzo codec.</description></property>
+  <property><name>fs.default.name</name><value></value>
+    <description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.</description><final>true</final></property>
+  <property><name>fs.trash.interval</name><value>360</value>
+    <description>Number of minutes between trash checkpoints. If zero, the trash feature is disabled.</description></property>
+  <property><name>fs.checkpoint.dir</name><value></value>
+    <description>Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description></property>
+  <property><name>fs.checkpoint.edits.dir</name><value>${fs.checkpoint.dir}</value>
+    <description>Determines where on the local filesystem the DFS secondary name node should store the temporary edits to merge. If this is a comma-delimited list of directories then the edits are replicated in all of the directories for redundancy. The default value is the same as fs.checkpoint.dir.</description></property>
+  <property><name>fs.checkpoint.period</name><value>21600</value>
+    <description>The number of seconds between two periodic checkpoints.</description></property>
+  <property><name>fs.checkpoint.size</name><value>536870912</value>
+    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.</description></property>
+  <property><name>ipc.client.idlethreshold</name><value>8000</value>
+    <description>Defines the threshold number of connections after which connections will be inspected for idleness.</description></property>
+  <property><name>ipc.client.connection.maxidletime</name><value>30000</value>
+    <description>The maximum time after which a client will bring down the connection to the server.</description></property>
+  <property><name>ipc.client.connect.max.retries</name><value>50</value>
+    <description>Defines the maximum number of retries for IPC connections.</description></property>
+  <property><name>webinterface.private.actions</name><value>false</value>
+    <description>If set to true, the web interfaces of JT and NN may contain actions, such as kill job, delete file, etc., that should not be exposed to the public. Enable this option if the interfaces are only reachable by those who have the right authorization.</description></property>
+  <property><name>hadoop.security.authentication</name><value></value>
+    <description>Set the authentication for the cluster. Valid values are: simple or kerberos.</description></property>
+  <property><name>hadoop.security.authorization</name><value></value>
+    <description>Enable authorization for different protocols.</description></property>
+  <property><name>hadoop.security.auth_to_local</name><value></value>
+    <description>The mapping from kerberos principal names to local OS user names. The default rule is just "DEFAULT", which takes all principals in your default domain to their first component: "omalley@APACHE.ORG" and "omalley/admin@APACHE.ORG" both become "omalley", if your default domain is APACHE.ORG.
+The translation rules have 3 sections: base filter substitution
+The base consists of a number that represents the number of components in the principal name excluding the realm, and the pattern for building the name from the sections of the principal name. The base uses $0 to mean the realm, $1 to mean the first component and $2 to mean the second component.
+
+[1:$1@$0] translates "omalley@APACHE.ORG" to "omalley@APACHE.ORG"
+[2:$1] translates "omalley/admin@APACHE.ORG" to "omalley"
+[2:$1%$2] translates "omalley/admin@APACHE.ORG" to "omalley%admin"
+
+The filter is a regex in parens that must match the generated string for the rule to apply.
+"(.*%admin)" will take any string that ends in "%admin"
+"(.*@ACME.COM)" will take any string that ends in "@ACME.COM"
+
+Finally, the substitution is a sed rule to translate a regex into a fixed string.
+
+"s/@ACME\.COM//" removes the first instance of "@ACME.COM".
+"s/@[A-Z]*\.COM//" removes the first instance of "@" followed by a name followed by ".COM".
+"s/X/Y/g" replaces all of the "X" in the name with "Y"
+
+So, if your default realm was APACHE.ORG, but you also wanted to take all principals from ACME.COM that had a single component "joe@ACME.COM", you'd do:
+
+RULE:[1:$1@$0](.*@ACME.ORG)s/@.*//
+DEFAULT
+
+To also translate the names with a second component, you'd make the rules:
+
+RULE:[1:$1@$0](.*@ACME.ORG)s/@.*//
+RULE:[2:$1@$0](.*@ACME.ORG)s/@.*//
+DEFAULT
+
+If you want to treat all principals from APACHE.ORG with /admin as "admin", your rules would look like:
+
+RULE:[2:$1%$2@$0](.*%admin@APACHE.ORG)s/.*/admin/
+DEFAULT</description></property>
+</configuration>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hadoop-policy.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hadoop-policy.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hadoop-policy.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hadoop-policy.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,134 @@
+<configuration>
+  <property><name>security.client.protocol.acl</name><value>*</value>
+    <description>ACL for ClientProtocol, which is used by user code via the DistributedFileSystem. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.client.datanode.protocol.acl</name><value>*</value>
+    <description>ACL for ClientDatanodeProtocol, the client-to-datanode protocol for block recovery. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.datanode.protocol.acl</name><value>*</value>
+    <description>ACL for DatanodeProtocol, which is used by datanodes to communicate with the namenode. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.inter.datanode.protocol.acl</name><value>*</value>
+    <description>ACL for InterDatanodeProtocol, the inter-datanode protocol for updating generation timestamp. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.namenode.protocol.acl</name><value>*</value>
+    <description>ACL for NamenodeProtocol, the protocol used by the secondary namenode to communicate with the namenode. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.inter.tracker.protocol.acl</name><value>*</value>
+    <description>ACL for InterTrackerProtocol, used by the tasktrackers to communicate with the jobtracker. The ACL is a comma-separated list of user and group names.
+      The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.job.submission.protocol.acl</name><value>*</value>
+    <description>ACL for JobSubmissionProtocol, used by job clients to communicate with the jobtracker for job submission, querying job status etc. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.task.umbilical.protocol.acl</name><value>*</value>
+    <description>ACL for TaskUmbilicalProtocol, used by the map and reduce tasks to communicate with the parent tasktracker. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.admin.operations.protocol.acl</name><value></value>
+    <description>ACL for AdminOperationsProtocol. Used for admin commands. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.refresh.usertogroups.mappings.protocol.acl</name><value></value>
+    <description>ACL for RefreshUserMappingsProtocol. Used to refresh users mappings. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+  <property><name>security.refresh.policy.protocol.acl</name><value></value>
+    <description>ACL for RefreshAuthorizationPolicyProtocol, used by the dfsadmin and mradmin commands to refresh the security policy in effect. The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank, e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.</description></property>
+</configuration>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hdfs-site.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hdfs-site.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hdfs-site.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/configuration/hdfs-site.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,415 @@
+<configuration>
+  <property><name>dfs.name.dir</name><value></value>
+    <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description><final>true</final></property>
+  <property><name>dfs.support.append</name><value></value>
+    <description>to enable dfs append</description><final>true</final></property>
+  <property><name>dfs.webhdfs.enabled</name><value></value>
+    <description>to enable webhdfs</description><final>true</final></property>
+  <property><name>dfs.datanode.socket.write.timeout</name><value>0</value>
+    <description>DFS Client write socket timeout</description></property>
+  <property><name>dfs.datanode.failed.volumes.tolerated</name><value></value>
+    <description>Number of failed disks a DataNode would tolerate</description><final>true</final></property>
+  <property><name>dfs.block.local-path-access.user</name><value></value>
+    <description>the user who is allowed to perform short circuit reads</description><final>true</final></property>
+  <property><name>dfs.data.dir</name><value></value>
+    <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
+    <final>true</final></property>
+  <property><name>dfs.hosts.exclude</name><value></value>
+    <description>Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.</description></property>
+  <property><name>dfs.hosts</name><value></value>
+    <description>Names a file that contains a list of hosts that are permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, all hosts are permitted.</description></property>
+  <property><name>dfs.replication.max</name><value>50</value>
+    <description>Maximal block replication.</description></property>
+  <property><name>dfs.replication</name><value></value>
+    <description>Default block replication.</description></property>
+  <property><name>dfs.heartbeat.interval</name><value>3</value>
+    <description>Determines datanode heartbeat interval in seconds.</description></property>
+  <property><name>dfs.safemode.threshold.pct</name><value>1.0f</value>
+    <description>Specifies the percentage of blocks that should satisfy the minimal replication requirement defined by dfs.replication.min. Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent.</description></property>
+  <property><name>dfs.balance.bandwidthPerSec</name><value>6250000</value>
+    <description>Specifies the maximum amount of bandwidth that each datanode can utilize for the balancing purpose, in terms of the number of bytes per second.</description></property>
+  <property><name>dfs.datanode.address</name><value></value></property>
+  <property><name>dfs.datanode.http.address</name><value></value></property>
+  <property><name>dfs.block.size</name><value>134217728</value>
+    <description>The default block size for new files.</description></property>
+  <property><name>dfs.http.address</name><value></value>
+    <description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.</description><final>true</final></property>
+  <property><name>dfs.datanode.du.reserved</name><value></value>
+    <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.</description></property>
+  <property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:8010</value>
+    <description>The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description></property>
+  <property><name>dfs.blockreport.initialDelay</name><value>120</value>
+    <description>Delay for first block report in seconds.</description></property>
+  <property><name>dfs.datanode.du.pct</name><value>0.85f</value>
+    <description>When calculating remaining space, only use this percentage of the real available space.</description></property>
+  <property><name>dfs.namenode.handler.count</name><value>40</value>
+    <description>The number of server threads for the namenode.</description></property>
+  <property><name>dfs.datanode.max.xcievers</name><value>4096</value>
+    <description>PRIVATE CONFIG VARIABLE</description></property>
+  <property><name>dfs.umaskmode</name><value>077</value>
+    <description>The octal umask used when creating files and directories.</description></property>
+  <property><name>dfs.web.ugi</name><value>gopher,gopher</value>
+    <description>The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2, ...</description></property>
+  <property><name>dfs.permissions</name><value>true</value>
+    <description>If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.</description></property>
+  <property><name>dfs.permissions.supergroup</name><value>hdfs</value>
+    <description>The name of the group of super-users.</description></property>
+  <property><name>dfs.namenode.handler.count</name><value>100</value>
+    <description>Added to grow Queue size so that more client connections are allowed</description></property>
+  <property><name>ipc.server.max.response.size</name><value>5242880</value></property>
+  <property><name>dfs.block.access.token.enable</name><value>true</value>
+    <description>If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes.</description></property>
+  <property><name>dfs.namenode.kerberos.principal</name><value></value>
+    <description>Kerberos principal name for the NameNode</description></property>
+  <property><name>dfs.secondary.namenode.kerberos.principal</name><value></value>
+    <description>Kerberos principal name for the secondary NameNode.</description></property>
+  <property><name>dfs.namenode.kerberos.https.principal</name><value></value>
+    <description>The Kerberos principal for the host that the NameNode runs on.</description></property>
+  <property><name>dfs.secondary.namenode.kerberos.https.principal</name><value></value>
+    <description>The Kerberos principal for the host that the secondary NameNode runs on.</description></property>
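
As the principal descriptions above note, the literal _HOST token is substituted with the actual hostname of the running instance. A hypothetical filled-in entry, following the "hbase/_HOST@EXAMPLE.COM" pattern the HBase descriptions use (the "nn" service name and realm are examples only, not values from this commit):

  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>nn/_HOST@EXAMPLE.COM</value>
  </property>

On a NameNode host named nn01.example.com this would resolve to nn/nn01.example.com@EXAMPLE.COM at startup.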
+  <property><name>dfs.secondary.http.address</name><value></value>
+    <description>Address of secondary namenode web server</description></property>
+  <property><name>dfs.secondary.https.port</name><value>50490</value>
+    <description>The https port where secondary-namenode binds</description></property>
+  <property><name>dfs.web.authentication.kerberos.principal</name><value></value>
+    <description>The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per the Kerberos HTTP SPNEGO specification.</description></property>
+  <property><name>dfs.web.authentication.kerberos.keytab</name><value></value>
+    <description>The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.</description></property>
+  <property><name>dfs.datanode.kerberos.principal</name><value></value>
+    <description>The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name.</description></property>
+  <property><name>dfs.namenode.keytab.file</name><value></value>
+    <description>Combined keytab file containing the namenode service and host principals.</description></property>
+  <property><name>dfs.secondary.namenode.keytab.file</name><value></value>
+    <description>Combined keytab file containing the namenode service and host principals.</description></property>
+  <property><name>dfs.datanode.keytab.file</name><value></value>
+    <description>The filename of the keytab file for the DataNode.</description></property>
+  <property><name>dfs.https.port</name><value>50470</value>
+    <description>The https port where namenode binds</description></property>
+  <property><name>dfs.https.address</name><value></value>
+    <description>The https address where namenode binds</description></property>
+  <property><name>dfs.datanode.data.dir.perm</name><value></value>
+    <description>The permissions that should be there on dfs.data.dir directories. The datanode will not come up if the permissions are different on existing dfs.data.dir directories. If the directories don't exist, they will be created with this permission.</description></property>
+  <property><name>dfs.access.time.precision</name><value>0</value>
+    <description>The access time for an HDFS file is precise up to this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.</description></property>
+  <property><name>dfs.cluster.administrators</name><value>hdfs</value>
+    <description>ACL for who all can view the default servlets in the HDFS</description></property>
+  <property><name>ipc.server.read.threadpool.size</name><value>5</value></property>
+  <property><name>dfs.datanode.failed.volumes.tolerated</name><value>0</value>
+    <description>Number of failed disks a DataNode would tolerate</description></property>
+</configuration>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/metainfo.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/metainfo.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/metainfo.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HDFS/metainfo.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,46 @@
+<metainfo>
+  <user>root</user>
+  <comment>Apache Hadoop Distributed File System</comment>
+  <version>1.1.2</version>
+  <components>
+    <component><name>NAMENODE</name><category>MASTER</category></component>
+    <component><name>DATANODE</name><category>SLAVE</category></component>
+    <component><name>SECONDARY_NAMENODE</name><category>MASTER</category></component>
+    <component><name>HDFS_CLIENT</name><category>CLIENT</category></component>
+  </components>
+</metainfo>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/configuration/hive-site.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/configuration/hive-site.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/configuration/hive-site.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/configuration/hive-site.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,138 @@
+<configuration>
+  <property><name>hive.metastore.local</name><value>false</value>
+    <description>controls whether to connect to a remote metastore server or open a new metastore server in the Hive Client JVM</description></property>
+  <property><name>javax.jdo.option.ConnectionURL</name><value></value>
+    <description>JDBC connect string for a JDBC metastore</description></property>
+  <property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value>
+    <description>Driver class name for a JDBC metastore</description></property>
+  <property><name>javax.jdo.option.ConnectionUserName</name><value></value>
+    <description>username to use against metastore database</description></property>
+  <property><name>javax.jdo.option.ConnectionPassword</name><value></value>
+    <description>password to use against metastore database</description></property>
+  <property><name>hive.metastore.warehouse.dir</name><value>/apps/hive/warehouse</value>
+    <description>location of default database for the warehouse</description></property>
+  <property><name>hive.metastore.sasl.enabled</name><value></value>
+    <description>If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.</description></property>
+  <property><name>hive.metastore.kerberos.keytab.file</name><value></value>
+    <description>The path to the Kerberos Keytab file containing the metastore thrift server's service principal.</description></property>
+  <property><name>hive.metastore.kerberos.principal</name><value></value>
+    <description>The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.</description></property>
+  <property><name>hive.metastore.cache.pinobjtypes</name><value>Table,Database,Type,FieldSchema,Order</value>
+    <description>List of comma separated metastore object types that should be pinned in the cache</description></property>
+  <property><name>hive.metastore.uris</name><value></value>
+    <description>URI for client to contact metastore server</description></property>
+  <property><name>hive.semantic.analyzer.factory.impl</name><value>org.apache.hivealog.cli.HCatSemanticAnalyzerFactory</value>
+    <description>controls which SemanticAnalyzerFactory implementation class is used by the CLI</description></property>
+  <property><name>hadoop.clientside.fs.operations</name><value>true</value>
+    <description>FS operations are owned by client</description></property>
+  <property><name>hive.metastore.client.socket.timeout</name><value>60</value>
+    <description>MetaStore Client socket timeout in seconds</description></property>
+  <property><name>hive.metastore.execute.setugi</name><value>true</value>
+    <description>In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best effort: if the client sets it to true and the server sets it to false, the client setting will be ignored.</description></property>
+  <property><name>hive.security.authorization.enabled</name><value>true</value>
+    <description>enable or disable the hive client authorization</description></property>
+  <property><name>hive.security.authorization.manager</name><value>org.apache.hcatalog.security.HdfsAuthorizationProvider</value>
+    <description>the hive client authorization manager class name. The user-defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.</description></property>
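
For the javax.jdo.option.ConnectionURL entry above, the value paired with the com.mysql.jdbc.Driver setting would typically be a MySQL JDBC URL. A hedged sketch; the host and database names here are hypothetical, since the stack file ships the value empty for the install wizard to fill in:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://db.example.com/hive?createDatabaseIfNotExist=true</value>
  </property>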
+  <property><name>hive.server2.enable.doAs</name><value>true</value></property>
+  <property><name>fs.hdfs.impl.disable.cache</name><value>true</value></property>
+</configuration>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/metainfo.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/metainfo.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/metainfo.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/HIVE/metainfo.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,43 @@
+<metainfo>
+  <user>root</user>
+  <comment>Data warehouse system for ad-hoc queries &amp; analysis of large datasets and table &amp; storage management service</comment>
+  <version>0.10.0</version>
+  <components>
+    <component><name>HIVE_METASTORE</name><category>MASTER</category></component>
+    <component><name>HIVE_SERVER</name><category>MASTER</category></component>
+    <component><name>MYSQL_SERVER</name><category>MASTER</category></component>
+    <component><name>HIVE_CLIENT</name><category>CLIENT</category></component>
+  </components>
+</metainfo>

Added: incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/capacity-scheduler.xml
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/capacity-scheduler.xml?rev=1463225&view=auto
==============================================================================
--- incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/capacity-scheduler.xml (added)
+++ incubator/ambari/trunk/ambari-server/src/main/resources/stacks/HDP/1.2.1/services/MAPREDUCE/configuration/capacity-scheduler.xml Mon Apr  1 17:50:42 2013
@@ -0,0 +1,195 @@
+<configuration>
+  <property><name>mapred.capacity-scheduler.maximum-system-jobs</name><value>3000</value>
+    <description>Maximum number of jobs in the system which can be initialized, concurrently, by the CapacityScheduler.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.capacity</name><value>100</value>
+    <description>Percentage of the number of slots in the cluster that are to be available for jobs in this queue.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.maximum-capacity</name><value>-1</value>
+    <description>maximum-capacity defines a limit beyond which a queue cannot use the capacity of the cluster. This provides a means to limit how much excess capacity a queue can use. By default, there is no limit. The maximum-capacity of a queue can only be greater than or equal to its minimum capacity. The default value of -1 implies a queue can use the complete capacity of the cluster. This property could be used to curtail certain jobs which are long running in nature from occupying more than a certain percentage of the cluster, which in the absence of pre-emption could lead to the capacity guarantees of other queues being affected. One important thing to note is that maximum-capacity is a percentage, so based on the cluster's capacity the max capacity would change. So if a large number of nodes or racks is added to the cluster, the max capacity in absolute terms would increase accordingly.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.supports-priority</name><value>false</value>
+    <description>If true, priorities of jobs will be taken into account in scheduling decisions.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.minimum-user-limit-percent</name><value>100</value>
+    <description>Each queue enforces a limit on the percentage of resources allocated to a user at any given time, if there is competition for them. This user limit can vary between a minimum and maximum value. The former depends on the number of users who have submitted jobs, and the latter is set to this property value.
+      For example, suppose the value of this property is 25. If two users have submitted jobs to a queue, no single user can use more than 50% of the queue resources. If a third user submits a job, no single user can use more than 33% of the queue resources. With 4 or more users, no user can use more than 25% of the queue's resources. A value of 100 implies no user limits are imposed.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.user-limit-factor</name><value>1</value>
+    <description>The multiple of the queue capacity which can be configured to allow a single user to acquire more slots.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks</name><value>200000</value>
+    <description>The maximum number of tasks, across all jobs in the queue, which can be initialized concurrently. Once the queue's jobs exceed this limit they will be queued on disk.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks-per-user</name><value>100000</value>
+    <description>The maximum number of tasks per-user, across all of the user's jobs in the queue, which can be initialized concurrently. Once the user's jobs exceed this limit they will be queued on disk.</description></property>
+  <property><name>mapred.capacity-scheduler.queue.default.init-accept-jobs-factor</name><value>10</value>
+    <description>The multiple of (maximum-system-jobs * queue-capacity) used to determine the number of jobs which are accepted by the scheduler.</description></property>
+  <property><name>mapred.capacity-scheduler.default-supports-priority</name><value>false</value>
+    <description>If true, priorities of jobs will be taken into account in scheduling decisions by default in a job queue.</description></property>
+  <property><name>mapred.capacity-scheduler.default-minimum-user-limit-percent</name><value>100</value>
+    <description>The percentage of the resources limited to a particular user for the job queue at any given point of time by default.</description></property>
+  <property><name>mapred.capacity-scheduler.default-user-limit-factor</name><value>1</value>
+    <description>The default multiple of queue-capacity which is used to determine the amount of slots a single user can consume concurrently.</description></property>
+  <property><name>mapred.capacity-scheduler.default-maximum-active-tasks-per-queue</name><value>200000</value>
+    <description>The default maximum number of tasks, across all jobs in the queue, which can be initialized concurrently. Once the queue's jobs exceed this limit they will be queued on disk.</description></property>
+  <property><name>mapred.capacity-scheduler.default-maximum-active-tasks-per-user</name><value>100000</value>
+    <description>The default maximum number of tasks per-user, across all of the user's jobs in the queue, which can be initialized concurrently. Once the user's jobs exceed this limit they will be queued on disk.</description></property>
+  <property><name>mapred.capacity-scheduler.default-init-accept-jobs-factor</name><value>10</value>
+    <description>The default multiple of (maximum-system-jobs * queue-capacity) used to determine the number of jobs which are accepted by the scheduler.</description></property>
+  <property><name>mapred.capacity-scheduler.init-poll-interval</name><value>5000</value>
+    <description>The amount of time in milliseconds which is used to poll the job queues for jobs to initialize.</description></property>
+  <property><name>mapred.capacity-scheduler.init-worker-threads</name><value>5</value>
+    <description>Number of worker threads which would be used by the Initialization poller to initialize jobs in a set of queues. If the number mentioned in the property is equal to the number of job queues, then a single thread would initialize jobs in a queue. If it is less, then a thread would get a set of queues assigned. If the number is greater, the number of threads would be equal to the number of job queues.</description></property>
+</configuration>
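
To illustrate the minimum-user-limit-percent arithmetic described above with a second queue, a hedged sketch following the same naming pattern; the "research" queue name and both values are hypothetical, not part of this commit:

  <property><name>mapred.capacity-scheduler.queue.research.capacity</name><value>30</value></property>
  <property><name>mapred.capacity-scheduler.queue.research.minimum-user-limit-percent</name><value>25</value></property>

Per the description of that property, two competing users in the research queue would each be limited to 50% of its resources, three users to 33%, and four or more to 25%.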