Subject: svn commit: r1485469 [1/9] - in /incubator/ambari/trunk: ./ ambari-web/app/assets/data/wizard/stack/hdp/version/ ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/ ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/ ambari-web/app/assets/d...
Date: Wed, 22 May 2013 22:26:53 -0000
To: ambari-commits@incubator.apache.org
From: srimanth@apache.org
Reply-To: ambari-dev@incubator.apache.org

Author: srimanth
Date: Wed May 22 22:26:52 2013
New Revision: 1485469

URL: http://svn.apache.org/r1485469
Log:
AMBARI-2188. Update mock json data for Test mode. (srimanth)
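Context for the fixtures below: in Test mode the install wizard reads these local JSON files instead of calling the live Ambari REST API (the Modified list includes ambari-web/app/utils/ajax.js, which maps requests to mock URLs). The following is a minimal, illustrative sketch of that idea only; the names App.testMode, urls, and getUrl are assumptions for this example and not the actual ambari-web implementation.

    // Illustrative sketch: choose a local mock fixture in test mode, the live
    // REST endpoint otherwise. Names here are assumptions, not ambari-web's API.
    var App = { testMode: true };

    var urls = {
      'wizard.stack_services': {
        real: '/api/v1/stacks2/HDP/versions/{version}/stackServices?fields=StackServices',
        mock: '/data/wizard/stack/hdp/version/{version}.json'
      },
      'wizard.service_configurations': {
        real: '/api/v1/stacks2/HDP/versions/{version}/stackServices/{service}/configurations?fields=*',
        mock: '/data/wizard/stack/hdp/version{version}/{service}.json'
      }
    };

    // Substitute {placeholders} from params into the selected URL template.
    function getUrl(name, params) {
      var template = App.testMode ? urls[name].mock : urls[name].real;
      return template.replace(/\{(\w+)\}/g, function (match, key) {
        return params[key];
      });
    }

    // Resolves to '/data/wizard/stack/hdp/version1.2.1/HBASE.json' in test mode,
    // matching the fixture layout added by this commit.
    console.log(getUrl('wizard.service_configurations', { version: '1.2.1', service: 'HBASE' }));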
Added:
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/1.2.1.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/2.0.1.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HBASE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HCATALOG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HDFS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HIVE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/MAPREDUCE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/OOZIE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/WEBHCAT.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/ZOOKEEPER.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/GANGLIA.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/HBASE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/HCATALOG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/HDFS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/HIVE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/HUE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/MAPREDUCE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/NAGIOS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/OOZIE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/PIG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/SQOOP.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/WEBHCAT.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/ZOOKEEPER.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.3.0/global.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/GANGLIA.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/HBASE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/HCATALOG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/HDFS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/HIVE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/HUE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/MAPREDUCEv2.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/NAGIOS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/OOZIE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/PIG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/SQOOP.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/TEZ.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/WEBHCAT.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/YARN.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/ZOOKEEPER.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version2.0.1/global.json
Removed:
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/HBASE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/HCATALOG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/HDFS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/HIVE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/MAPREDUCE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/OOZIE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/WEBHCAT.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version122/ZOOKEEPER.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/HBASE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/HCATALOG.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/HDFS.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/HIVE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/HUE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/MAPREDUCE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/OOZIE.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/WEBHCAT.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/ZOOKEEPER.json
    incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version130/global.json
Modified:
    incubator/ambari/trunk/CHANGES.txt
    incubator/ambari/trunk/ambari-web/app/controllers/wizard.js
    incubator/ambari/trunk/ambari-web/app/controllers/wizard/step3_controller.js
    incubator/ambari/trunk/ambari-web/app/utils/ajax.js
    incubator/ambari/trunk/ambari-web/app/utils/config.js

Modified: incubator/ambari/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/CHANGES.txt?rev=1485469&r1=1485468&r2=1485469&view=diff
==============================================================================
--- incubator/ambari/trunk/CHANGES.txt (original)
+++ incubator/ambari/trunk/CHANGES.txt Wed May 22 22:26:52 2013
@@ -306,6 +306,8 @@ Trunk (unreleased changes):
 
 IMPROVEMENTS
 
+ AMBARI-2188. Update mock json data for Test mode. (srimanth)
+
 AMBARI-2169. Going from Hosts page to Host Details page and back
 should preserve the filters, sort order, and pagination.
(yusaku) Added: incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/1.2.1.json URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/1.2.1.json?rev=1485469&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/1.2.1.json (added) +++ incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/1.2.1.json Wed May 22 22:26:52 2013 @@ -0,0 +1,137 @@ +{ + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices?fields=StackServices", + "items" : [ + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/WEBHCAT", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "WEBHCAT", + "stack_name" : "HDP", + "comments" : "This is comment for WEBHCAT service", + "service_version" : "0.5.0" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/GANGLIA", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "GANGLIA", + "stack_name" : "HDP", + "comments" : "Ganglia Metrics Collection system", + "service_version" : "3.2.0" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/NAGIOS", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "NAGIOS", + "stack_name" : "HDP", + "comments" : "Nagios Monitoring and Alerting system", + "service_version" : "3.2.3" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE", + "StackServices" : { + "user_name" : "mapred", + "stack_version" : "1.2.1", + "service_name" : "HBASE", + "stack_name" : "HDP", + "comments" : "Non-relational distributed database and centralized service for configuration management & synchronization", + "service_version" : "0.94.5" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/SQOOP", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "SQOOP", + "stack_name" : "HDP", + "comments" : "Tool for transferring bulk data between Apache Hadoop and structured data stores such as relational databases", + "service_version" : "1.4.2" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "HDFS", + "stack_name" : "HDP", + "comments" : "Apache Hadoop Distributed File System", + "service_version" : "1.1.2" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/MAPREDUCE", + "StackServices" : { + "user_name" : "mapred", + "stack_version" : "1.2.1", + "service_name" : "MAPREDUCE", + "stack_name" : "HDP", + "comments" : "Apache Hadoop Distributed Processing Framework", + "service_version" : "1.1.2" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/PIG", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "PIG", + "stack_name" : "HDP", + "comments" : "Scripting platform for analyzing large datasets", + "service_version" : "0.10.1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/ZOOKEEPER", + "StackServices" : { + 
"user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "ZOOKEEPER", + "stack_name" : "HDP", + "comments" : "This is comment for ZOOKEEPER service", + "service_version" : "3.4.5" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/OOZIE", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "OOZIE", + "stack_name" : "HDP", + "comments" : "System for workflow coordination and execution of Apache Hadoop jobs", + "service_version" : "3.2.0" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HCATALOG", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "HCATALOG", + "stack_name" : "HDP", + "comments" : "This is comment for HCATALOG service", + "service_version" : "0.5.0" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE", + "StackServices" : { + "user_name" : "root", + "stack_version" : "1.2.1", + "service_name" : "HIVE", + "stack_name" : "HDP", + "comments" : "Data warehouse system for ad-hoc queries & analysis of large datasets and table & storage management service", + "service_version" : "0.10.0" + } + } + ] +} \ No newline at end of file Added: incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/2.0.1.json URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/2.0.1.json?rev=1485469&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/2.0.1.json (added) +++ incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version/2.0.1.json Wed May 22 22:26:52 2013 @@ -0,0 +1,148 @@ +{ + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices?fields=StackServices", + "items" : [ + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/MAPREDUCEv2", + "StackServices" : { + "user_name" : "mapred", + "stack_version" : "2.0.1", + "service_name" : "MAPREDUCEv2", + "stack_name" : "HDP", + "comments" : "Apache Hadoop NextGen MapReduce (client libraries)", + "service_version" : "2.0.3.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/OOZIE", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "OOZIE", + "stack_name" : "HDP", + "comments" : "System for workflow coordination and execution of Apache Hadoop jobs", + "service_version" : "3.3.1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/PIG", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "PIG", + "stack_name" : "HDP", + "comments" : "Scripting platform for analyzing large datasets", + "service_version" : "0.10.1.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/HCATALOG", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "HCATALOG", + "stack_name" : "HDP", + "comments" : "This is comment for HCATALOG service", + "service_version" : "0.5.0.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/WEBHCAT", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "WEBHCAT", + 
"stack_name" : "HDP", + "comments" : "This is comment for WEBHCAT service", + "service_version" : "0.5.0" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/ZOOKEEPER", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "ZOOKEEPER", + "stack_name" : "HDP", + "comments" : "This is comment for ZOOKEEPER service", + "service_version" : "3.4.5.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/GANGLIA", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "GANGLIA", + "stack_name" : "HDP", + "comments" : "Ganglia Metrics Collection system", + "service_version" : "3.2.0" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/HBASE", + "StackServices" : { + "user_name" : "mapred", + "stack_version" : "2.0.1", + "service_name" : "HBASE", + "stack_name" : "HDP", + "comments" : "Non-relational distributed database and centralized service for configuration management & synchronization", + "service_version" : "0.94.5.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/HIVE", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "HIVE", + "stack_name" : "HDP", + "comments" : "Data warehouse system for ad-hoc queries & analysis of large datasets and table & storage management service", + "service_version" : "0.10.0.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/YARN", + "StackServices" : { + "user_name" : "mapred", + "stack_version" : "2.0.1", + "service_name" : "YARN", + "stack_name" : "HDP", + "comments" : "Apache Hadoop NextGen MapReduce (YARN)", + "service_version" : "2.0.3.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/NAGIOS", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "NAGIOS", + "stack_name" : "HDP", + "comments" : "Nagios Monitoring and Alerting system", + "service_version" : "3.2.3" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/TEZ", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "TEZ", + "stack_name" : "HDP", + "comments" : "Tez is the next generation Hadoop Query Processing framework written on top of YARN", + "service_version" : "0.1.0.22-1" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/2.0.1/stackServices/HDFS", + "StackServices" : { + "user_name" : "root", + "stack_version" : "2.0.1", + "service_name" : "HDFS", + "stack_name" : "HDP", + "comments" : "Apache Hadoop Distributed File System", + "service_version" : "2.0.3.22-1" + } + } + ] +} \ No newline at end of file Added: incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HBASE.json URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HBASE.json?rev=1485469&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HBASE.json (added) +++ incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HBASE.json Wed May 22 22:26:52 2013 @@ -0,0 +1,113 @@ +{ + "href" : 
"http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations?fields=*", + "items" : [ + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/hbase.master.lease.thread.wakefrequency", + "StackConfigurations" : { + "property_description" : "The interval between checks for expired region server leases.\n This value has been reduced due to the other reduced values above so that\n the master will notice a dead region server sooner. The default is 15 seconds.\n ", + "property_value" : "3000", + "stack_version" : "1.2.1", + "property_name" : "hbase.master.lease.thread.wakefrequency", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/hbase.superuser", + "StackConfigurations" : { + "property_description" : "List of users or groups (comma-separated), who are allowed\n full privileges, regardless of stored ACLs, across the cluster.\n Only used when HBase security is enabled.\n ", + "property_value" : "hbase", + "stack_version" : "1.2.1", + "property_name" : "hbase.superuser", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/security.client.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for HRegionInterface protocol implementations (ie. \n clients talking to HRegionServers)\n The ACL is a comma-separated list of user and group names. The user and \n group list is separated by a blank. For e.g. \"alice,bob users,wheel\". \n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.client.protocol.acl", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/security.admin.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for HMasterInterface protocol implementation (ie. \n clients talking to HMaster for admin operations).\n The ACL is a comma-separated list of user and group names. The user and \n group list is separated by a blank. For e.g. \"alice,bob users,wheel\". \n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.admin.protocol.acl", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/security.masterregion.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for HMasterRegionInterface protocol implementations\n (for HRegionServers communicating with HMaster)\n The ACL is a comma-separated list of user and group names. The user and \n group list is separated by a blank. For e.g. \"alice,bob users,wheel\". 
\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.masterregion.protocol.acl", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/hbase.zookeeper.useMulti", + "StackConfigurations" : { + "property_description" : "Instructs HBase to make use of ZooKeeper's multi-update functionality.\n This allows certain ZooKeeper operations to complete more quickly and prevents some issues\n with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).·\n IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+\n and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will\n not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).\n ", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "hbase.zookeeper.useMulti", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/hbase.zookeeper.property.clientPort", + "StackConfigurations" : { + "property_description" : "Property from ZooKeeper's config zoo.cfg.\n The port at which the clients will connect.\n ", + "property_value" : "2181", + "stack_version" : "1.2.1", + "property_name" : "hbase.zookeeper.property.clientPort", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/hbase.cluster.distributed", + "StackConfigurations" : { + "property_description" : "The mode the cluster will be in. Possible values are\n false for standalone mode and true for distributed mode. If\n false, startup will run all HBase and ZooKeeper daemons together\n in the one JVM.\n ", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "hbase.cluster.distributed", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HBASE/configurations/hbase.regionserver.optionalcacheflushinterval", + "StackConfigurations" : { + "property_description" : "\n Amount of time to wait since the last time a region was flushed before\n invoking an optional cache flush. 
Default 60,000.\n ", + "property_value" : "10000", + "stack_version" : "1.2.1", + "property_name" : "hbase.regionserver.optionalcacheflushinterval", + "service_name" : "HBASE", + "stack_name" : "HDP", + "type" : "hbase-site.xml" + } + } + ] +} \ No newline at end of file Added: incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HCATALOG.json URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HCATALOG.json?rev=1485469&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HCATALOG.json (added) +++ incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HCATALOG.json Wed May 22 22:26:52 2013 @@ -0,0 +1,4 @@ +{ + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HCATALOG/configurations?fields=*", + "items" : [ ] +} \ No newline at end of file Added: incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HDFS.json URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HDFS.json?rev=1485469&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HDFS.json (added) +++ incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HDFS.json Wed May 22 22:26:52 2013 @@ -0,0 +1,533 @@ +{ + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations?fields=*", + "items" : [ + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.client.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for ClientProtocol, which is used by user code\n via the DistributedFileSystem.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.client.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.balance.bandwidthPerSec", + "StackConfigurations" : { + "property_description" : "\n Specifies the maximum amount of bandwidth that each datanode\n can utilize for the balancing purpose in term of\n the number of bytes per second.\n ", + "property_value" : "6250000", + "stack_version" : "1.2.1", + "property_name" : "dfs.balance.bandwidthPerSec", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.block.size", + "StackConfigurations" : { + "property_description" : "The default block size for new files.", + "property_value" : "134217728", + "stack_version" : "1.2.1", + "property_name" : "dfs.block.size", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.secondary.https.port", + "StackConfigurations" : { + "property_description" : "The https port where secondary-namenode binds", + "property_value" : "50490", + "stack_version" : "1.2.1", + "property_name" : "dfs.secondary.https.port", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/fs.checkpoint.size", + "StackConfigurations" : { + "property_description" : "The size of the current edit log (in bytes) that triggers\n a periodic checkpoint even if the fs.checkpoint.period hasn't expired.\n ", + "property_value" : "536870912", + "stack_version" : "1.2.1", + "property_name" : "fs.checkpoint.size", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/fs.checkpoint.period", + "StackConfigurations" : { + "property_description" : "The number of seconds between two periodic checkpoints.\n ", + "property_value" : "21600", + "stack_version" : "1.2.1", + "property_name" : "fs.checkpoint.period", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.datanode.max.xcievers", + "StackConfigurations" : { + "property_description" : "PRIVATE CONFIG VARIABLE", + "property_value" : "4096", + "stack_version" : "1.2.1", + "property_name" : "dfs.datanode.max.xcievers", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.permissions.supergroup", + "StackConfigurations" : { + "property_description" : "The name of the group of super-users.", + "property_value" : "hdfs", + "stack_version" : "1.2.1", + "property_name" : "dfs.permissions.supergroup", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : 
"http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.access.time.precision", + "StackConfigurations" : { + "property_description" : "The access time for HDFS file is precise upto this value.\n The default value is 1 hour. Setting a value of 0 disables\n access times for HDFS.\n ", + "property_value" : "0", + "stack_version" : "1.2.1", + "property_name" : "dfs.access.time.precision", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/webinterface.private.actions", + "StackConfigurations" : { + "property_description" : " If set to true, the web interfaces of JT and NN may contain\n actions, such as kill job, delete file, etc., that should\n not be exposed to public. Enable this option if the interfaces\n are only reachable by those who have the right authorization.\n ", + "property_value" : "false", + "stack_version" : "1.2.1", + "property_name" : "webinterface.private.actions", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.web.ugi", + "StackConfigurations" : { + "property_description" : "The user account used by the web interface.\nSyntax: USERNAME,GROUP1,GROUP2, ...\n", + "property_value" : "gopher,gopher", + "stack_version" : "1.2.1", + "property_name" : "dfs.web.ugi", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.umaskmode", + "StackConfigurations" : { + "property_description" : "\nThe octal umask used when creating files and directories.\n", + "property_value" : "077", + "stack_version" : "1.2.1", + "property_name" : "dfs.umaskmode", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.datanode.socket.write.timeout", + "StackConfigurations" : { + "property_description" : "DFS Client write socket timeout", + "property_value" : "0", + "stack_version" : "1.2.1", + "property_name" : "dfs.datanode.socket.write.timeout", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.block.access.token.enable", + "StackConfigurations" : { + "property_description" : "\nIf \"true\", access tokens are used as capabilities for accessing datanodes.\nIf \"false\", no access tokens are checked on accessing datanodes.\n", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "dfs.block.access.token.enable", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.task.umbilical.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for TaskUmbilicalProtocol, used by the map and reduce\n tasks to communicate with the parent tasktracker.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.task.umbilical.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.inter.tracker.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for InterTrackerProtocol, used by the tasktrackers to\n communicate with the jobtracker.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. \"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.inter.tracker.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.datanode.du.pct", + "StackConfigurations" : { + "property_description" : "When calculating remaining space, only use this percentage of the real available space\n", + "property_value" : "0.85f", + "stack_version" : "1.2.1", + "property_name" : "dfs.datanode.du.pct", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/io.file.buffer.size", + "StackConfigurations" : { + "property_description" : "The size of buffer for use in sequence files.\n The size of this buffer should probably be a multiple of hardware\n page size (4096 on Intel x86), and it determines how much data is\n buffered during read and write operations.", + "property_value" : "131072", + "stack_version" : "1.2.1", + "property_name" : "io.file.buffer.size", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.inter.datanode.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for InterDatanodeProtocol, the inter-datanode protocol\n for updating generation timestamp.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.inter.datanode.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.permissions", + "StackConfigurations" : { + "property_description" : "\nIf \"true\", enable permission checking in HDFS.\nIf \"false\", permission checking is turned off,\nbut all other behavior is unchanged.\nSwitching from one parameter value to the other does not change the mode,\nowner or group of files or directories.\n", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "dfs.permissions", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/ipc.client.connect.max.retries", + "StackConfigurations" : { + "property_description" : "Defines the maximum number of retries for IPC connections.", + "property_value" : "50", + "stack_version" : "1.2.1", + "property_name" : "ipc.client.connect.max.retries", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.namenode.handler.count", + "StackConfigurations" : { + "property_description" : "Added to grow Queue size so that more client connections are allowed", + "property_value" : "100", + "stack_version" : "1.2.1", + "property_name" : "dfs.namenode.handler.count", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.job.submission.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for JobSubmissionProtocol, used by job clients to\n communciate with the jobtracker for job submission, querying job status etc.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.job.submission.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.blockreport.initialDelay", + "StackConfigurations" : { + "property_description" : "Delay for first block report in seconds.", + "property_value" : "120", + "stack_version" : "1.2.1", + "property_name" : "dfs.blockreport.initialDelay", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.heartbeat.interval", + "StackConfigurations" : { + "property_description" : "Determines datanode heartbeat interval in seconds.", + "property_value" : "3", + "stack_version" : "1.2.1", + "property_name" : "dfs.heartbeat.interval", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/ipc.client.connection.maxidletime", + "StackConfigurations" : { + "property_description" : "The maximum time after which a client will bring down the\n connection to the server.\n ", + "property_value" : "30000", + "stack_version" : "1.2.1", + "property_name" : "ipc.client.connection.maxidletime", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/io.compression.codecs", + "StackConfigurations" : { + "property_description" : "A list of the compression codec classes that can be used\n for compression/decompression.", + "property_value" : "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.SnappyCodec", + "stack_version" : "1.2.1", + "property_name" : "io.compression.codecs", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/ipc.server.max.response.size", + "StackConfigurations" : { + "property_description" : null, + "property_value" : "5242880", + "stack_version" : "1.2.1", + "property_name" : "ipc.server.max.response.size", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.namenode.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for NamenodeProtocol, the protocol used by the secondary\n namenode to communicate with the namenode.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.namenode.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/ipc.server.read.threadpool.size", + "StackConfigurations" : { + "property_description" : null, + "property_value" : "5", + "stack_version" : "1.2.1", + "property_name" : "ipc.server.read.threadpool.size", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.datanode.ipc.address", + "StackConfigurations" : { + "property_description" : "\nThe datanode ipc server address and port.\nIf the port is 0 then the server will start on a free port.\n", + "property_value" : "0.0.0.0:8010", + "stack_version" : "1.2.1", + "property_name" : "dfs.datanode.ipc.address", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.cluster.administrators", + "StackConfigurations" : { + "property_description" : "ACL for who all can view the default servlets in the HDFS", + "property_value" : " hdfs", + "stack_version" : "1.2.1", + "property_name" : "dfs.cluster.administrators", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/fs.trash.interval", + "StackConfigurations" : { + "property_description" : "Number of minutes between trash checkpoints.\n If zero, the trash feature is disabled.\n ", + "property_value" : "360", + "stack_version" : "1.2.1", + "property_name" : "fs.trash.interval", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/ipc.client.idlethreshold", + "StackConfigurations" : { + "property_description" : "Defines the threshold number of connections after which\n connections will be inspected for idleness.\n ", + "property_value" : "8000", + "stack_version" : "1.2.1", + "property_name" : "ipc.client.idlethreshold", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.datanode.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for DatanodeProtocol, which is used by datanodes to\n communicate with the namenode.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.datanode.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.namenode.handler.count", + "StackConfigurations" : { + "property_description" : "The number of server threads for the namenode.", + "property_value" : "40", + "stack_version" : "1.2.1", + "property_name" : "dfs.namenode.handler.count", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.safemode.threshold.pct", + "StackConfigurations" : { + "property_description" : "\n Specifies the percentage of blocks that should satisfy\n the minimal replication requirement defined by dfs.replication.min.\n Values less than or equal to 0 mean not to start in safe mode.\n Values greater than 1 will make safe mode permanent.\n ", + "property_value" : "1.0f", + "stack_version" : "1.2.1", + "property_name" : "dfs.safemode.threshold.pct", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.replication.max", + "StackConfigurations" : { + "property_description" : "Maximal block replication.\n ", + "property_value" : "50", + "stack_version" : "1.2.1", + "property_name" : "dfs.replication.max", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/security.client.datanode.protocol.acl", + "StackConfigurations" : { + "property_description" : "ACL for ClientDatanodeProtocol, the client-to-datanode protocol\n for block recovery.\n The ACL is a comma-separated list of user and group names. The user and\n group list is separated by a blank. For e.g. 
\"alice,bob users,wheel\".\n A special value of \"*\" means all users are allowed.", + "property_value" : "*", + "stack_version" : "1.2.1", + "property_name" : "security.client.datanode.protocol.acl", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hadoop-policy.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/io.serializations", + "StackConfigurations" : { + "property_description" : null, + "property_value" : "org.apache.hadoop.io.serializer.WritableSerialization", + "stack_version" : "1.2.1", + "property_name" : "io.serializations", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/io.compression.codec.lzo.class", + "StackConfigurations" : { + "property_description" : "The implementation for lzo codec.", + "property_value" : "com.hadoop.compression.lzo.LzoCodec", + "stack_version" : "1.2.1", + "property_name" : "io.compression.codec.lzo.class", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.https.port", + "StackConfigurations" : { + "property_description" : "The https port where namenode binds", + "property_value" : "50470", + "stack_version" : "1.2.1", + "property_name" : "dfs.https.port", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/fs.checkpoint.edits.dir", + "StackConfigurations" : { + "property_description" : "Determines where on the local filesystem the DFS secondary\n name node should store the temporary edits to merge.\n If this is a comma-delimited list of directoires then teh edits is\n replicated in all of the directoires for redundancy.\n Default value is same as fs.checkpoint.dir\n ", + "property_value" : "${fs.checkpoint.dir}", + "stack_version" : "1.2.1", + "property_name" : "fs.checkpoint.edits.dir", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "core-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HDFS/configurations/dfs.datanode.failed.volumes.tolerated", + "StackConfigurations" : { + "property_description" : "Number of failed disks datanode would tolerate", + "property_value" : "0", + "stack_version" : "1.2.1", + "property_name" : "dfs.datanode.failed.volumes.tolerated", + "service_name" : "HDFS", + "stack_name" : "HDP", + "type" : "hdfs-site.xml" + } + } + ] +} \ No newline at end of file Added: incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HIVE.json URL: http://svn.apache.org/viewvc/incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HIVE.json?rev=1485469&view=auto ============================================================================== --- incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HIVE.json (added) +++ incubator/ambari/trunk/ambari-web/app/assets/data/wizard/stack/hdp/version1.2.1/HIVE.json Wed May 22 22:26:52 2013 @@ -0,0 +1,149 @@ +{ + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations?fields=*", + "items" : [ + { + "href" : 
"http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.metastore.client.socket.timeout", + "StackConfigurations" : { + "property_description" : "MetaStore Client socket timeout in seconds", + "property_value" : "60", + "stack_version" : "1.2.1", + "property_name" : "hive.metastore.client.socket.timeout", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.security.authorization.manager", + "StackConfigurations" : { + "property_description" : "the hive client authorization manager class name.\n The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. ", + "property_value" : "org.apache.hcatalog.security.HdfsAuthorizationProvider", + "stack_version" : "1.2.1", + "property_name" : "hive.security.authorization.manager", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.security.authorization.enabled", + "StackConfigurations" : { + "property_description" : "enable or disable the hive client authorization", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "hive.security.authorization.enabled", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.metastore.cache.pinobjtypes", + "StackConfigurations" : { + "property_description" : "List of comma separated metastore object types that should be pinned in the cache", + "property_value" : "Table,Database,Type,FieldSchema,Order", + "stack_version" : "1.2.1", + "property_name" : "hive.metastore.cache.pinobjtypes", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hadoop.clientside.fs.operations", + "StackConfigurations" : { + "property_description" : "FS operations are owned by client", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "hadoop.clientside.fs.operations", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/fs.hdfs.impl.disable.cache", + "StackConfigurations" : { + "property_description" : null, + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "fs.hdfs.impl.disable.cache", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.semantic.analyzer.factory.impl", + "StackConfigurations" : { + "property_description" : "controls which SemanticAnalyzerFactory implemenation class is used by CLI", + "property_value" : "org.apache.hivealog.cli.HCatSemanticAnalyzerFactory", + "stack_version" : "1.2.1", + "property_name" : "hive.semantic.analyzer.factory.impl", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : 
"http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.metastore.local", + "StackConfigurations" : { + "property_description" : "controls whether to connect to remove metastore server or\n open a new metastore server in Hive Client JVM", + "property_value" : "false", + "stack_version" : "1.2.1", + "property_name" : "hive.metastore.local", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.metastore.execute.setugi", + "StackConfigurations" : { + "property_description" : "In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored.", + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "hive.metastore.execute.setugi", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.metastore.warehouse.dir", + "StackConfigurations" : { + "property_description" : "location of default database for the warehouse", + "property_value" : "/apps/hive/warehouse", + "stack_version" : "1.2.1", + "property_name" : "hive.metastore.warehouse.dir", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/javax.jdo.option.ConnectionDriverName", + "StackConfigurations" : { + "property_description" : "Driver class name for a JDBC metastore", + "property_value" : "com.mysql.jdbc.Driver", + "stack_version" : "1.2.1", + "property_name" : "javax.jdo.option.ConnectionDriverName", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + }, + { + "href" : "http://192.168.56.101:8080/api/v1/stacks2/HDP/versions/1.2.1/stackServices/HIVE/configurations/hive.server2.enable.doAs", + "StackConfigurations" : { + "property_description" : null, + "property_value" : "true", + "stack_version" : "1.2.1", + "property_name" : "hive.server2.enable.doAs", + "service_name" : "HIVE", + "stack_name" : "HDP", + "type" : "hive-site.xml" + } + } + ] +} \ No newline at end of file