Return-Path:
X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io
Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io
Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id 386E4200B41 for ; Wed, 1 Jun 2016 19:28:55 +0200 (CEST)
Received: by cust-asf.ponee.io (Postfix) id 37684160A4E; Wed, 1 Jun 2016 17:28:55 +0000 (UTC)
Delivered-To: archive-asf-public@cust-asf.ponee.io
Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id 09992160A54 for ; Wed, 1 Jun 2016 19:28:52 +0200 (CEST)
Received: (qmail 96765 invoked by uid 500); 1 Jun 2016 17:28:52 -0000
Mailing-List: contact commits-help@ambari.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: ambari-dev@ambari.apache.org
Delivered-To: mailing list commits@ambari.apache.org
Received: (qmail 96387 invoked by uid 99); 1 Jun 2016 17:28:51 -0000
Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 01 Jun 2016 17:28:51 +0000
Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 7CC41E967C; Wed, 1 Jun 2016 17:28:51 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: dmitriusan@apache.org
To: commits@ambari.apache.org
Date: Wed, 01 Jun 2016 17:29:01 -0000
Message-Id: <364d02fa66824ca59f78f3ce5c203844@git.apache.org>
In-Reply-To: <4fa30a2bbf704b8b8fd6ff9832b0982f@git.apache.org>
References: <4fa30a2bbf704b8b8fd6ff9832b0982f@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [11/51] [partial] ambari git commit: AMBARI-16272. Ambari Upgrade shouldn't automatically add stack configs.
 Fix default upgrade policy and script defaults (dlysnichenko)
archived-at: Wed, 01 Jun 2016 17:28:55 -0000

http://git-wip-us.apache.org/repos/asf/ambari/blob/8e7103a8/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HDFS/configuration/core-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HDFS/configuration/core-site.xml b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HDFS/configuration/core-site.xml
index 07cb6f5..d01b06c 100644
--- a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HDFS/configuration/core-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HDFS/configuration/core-site.xml
@@ -26,29 +26,29 @@
The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.
- - + +
io.serializations
org.apache.hadoop.io.serializer.WritableSerialization
- - + +
io.compression.codecs
A list of the compression codec classes that can be used for compression/decompression.
- - + +
io.compression.codec.lzo.class
com.hadoop.compression.lzo.LzoCodec
The implementation for lzo codec.
- - + +
@@ -58,8 +58,8 @@
The name of the default file system. Either the literal string "local" or a host:port for HDFS.
true
- - + +
fs.trash.interval
@@ -67,8 +67,8 @@
Number of minutes between trash checkpoints. If zero, the trash feature is disabled.
- - + +
fs.checkpoint.dir
@@ -78,8 +78,8 @@
If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.
- - + +
fs.checkpoint.edits.dir
@@ -90,16 +90,16 @@
replicated in all of the directories for redundancy. Default value is same as fs.checkpoint.dir
- - + +
fs.checkpoint.period
21600
The number of seconds between two periodic checkpoints.
- - + +
fs.checkpoint.size
@@ -107,8 +107,8 @@
The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.
- - + +
@@ -117,8 +117,8 @@
Defines the threshold number of connections after which connections will be inspected for idleness.
- - + +
ipc.client.connection.maxidletime
@@ -126,15 +126,15 @@
The maximum time after which a client will bring down the connection to the server.
- - + +
ipc.client.connect.max.retries
50
Defines the maximum number of retries for IPC connections.
- - + +
@@ -145,8 +145,8 @@
not be exposed to public. Enable this option if the interfaces are only reachable by those who have the right authorization.
- - + +
hadoop.security.authentication
@@ -155,8 +155,8 @@
Set the authentication for the cluster. Valid values are: simple or kerberos.
- - + +
hadoop.security.authorization
@@ -164,8 +164,8 @@
Enable authorization for different protocols.
- - + +
hadoop.security.auth_to_local
@@ -208,8 +208,8 @@
If you want to treat all principals from APACHE.ORG with /admin as "admin", your RULE[2:$1%$2@$0](.%admin@APACHE.ORG)s/./admin/ DEFAULT
- - + +
@@ -226,8 +226,8 @@
If the port is 0 then the server will start on a free port. The octal umask used when creating files and directories.
- - + +
dfs.web.ugi
@@ -236,8 +236,8 @@ The octal umask used when creating files and directories.
The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2, ...
- - + +
dfs.permissions
@@ -249,28 +249,28 @@ but all other behavior is unchanged. Switching from one
parameter value to the other does not change the mode, owner or group of files or directories.
- - + +
dfs.permissions.supergroup
hdfs
The name of the group of super-users.
- - + +
dfs.namenode.handler.count
100
Added to grow Queue size so that more client connections are allowed
- - + +
ipc.server.max.response.size
5242880
- - + +
dfs.block.access.token.enable
@@ -279,8 +279,8 @@ owner or group of files or directories.
If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes.
- - + +
dfs.namenode.kerberos.principal
@@ -288,8 +288,8 @@ If "false", no access tokens are checked on accessing datanodes.
Kerberos principal name for the NameNode
- - + +
dfs.secondary.namenode.kerberos.principal
@@ -297,8 +297,8 @@ Kerberos principal name for the NameNode
Kerberos principal name for the secondary NameNode.
- - + +
dfs.secondary.http.address
Address of secondary namenode web server
- - + +
dfs.secondary.https.port
50490
The https port where secondary-namenode binds
- - + +
dfs.web.authentication.kerberos.principal
@@ -340,8 +340,8 @@ Kerberos principal name for the NameNode
The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPNEGO specification.
- - + +
dfs.web.authentication.kerberos.keytab
@@ -350,8 +350,8 @@ Kerberos principal name for the NameNode
The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
- - + +
dfs.datanode.kerberos.principal
@@ -359,8 +359,8 @@ Kerberos principal name for the NameNode
The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name.
- - + +
dfs.namenode.keytab.file
@@ -368,8 +368,8 @@ Kerberos principal name for the NameNode
Combined keytab file containing the namenode service and host principals.
- - + +
dfs.secondary.namenode.keytab.file
@@ -377,8 +377,8 @@ Kerberos principal name for the NameNode
Combined keytab file containing the namenode service and host principals.
- - + +
dfs.datanode.keytab.file
@@ -386,22 +386,22 @@ Kerberos principal name for the NameNode
The filename of the keytab file for the DataNode.
- - + +
dfs.https.port
50470
The https port where namenode binds
- - + +
dfs.https.address
The https address where namenode binds
- - + +
dfs.datanode.data.dir.perm
@@ -410,8 +410,8 @@ Kerberos principal name for the NameNode
directories.
The datanode will not come up if the permissions are different on existing dfs.data.dir directories. If the directories don't exist, they will be created with this permission.
- - + +
dfs.access.time.precision
@@ -420,28 +420,28 @@ don't exist, they will be created with this permission.
The default value is 1 hour. Setting a value of 0 disables access times for HDFS.
- - + +
dfs.cluster.administrators
hdfs
ACL for who all can view the default servlets in the HDFS
- - + +
ipc.server.read.threadpool.size
5
- - + +
dfs.datanode.failed.volumes.tolerated
0
Number of failed disks datanode would tolerate
- - + +

http://git-wip-us.apache.org/repos/asf/ambari/blob/8e7103a8/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HIVE/configuration/hive-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HIVE/configuration/hive-site.xml b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HIVE/configuration/hive-site.xml
index c20372a..91402b8 100644
--- a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HIVE/configuration/hive-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/HIVE/configuration/hive-site.xml
@@ -22,128 +22,128 @@ limitations under the License.
false
controls whether to connect to remote metastore server or open a new metastore server in Hive Client JVM
- - + +
javax.jdo.option.ConnectionURL
JDBC connect string for a JDBC metastore
- - + +
javax.jdo.option.ConnectionDriverName
com.mysql.jdbc.Driver
Driver class name for a JDBC metastore
- - + +
javax.jdo.option.ConnectionUserName
username to use against metastore database
- - + +
javax.jdo.option.ConnectionPassword
password to use against metastore database
- - + +
hive.metastore.warehouse.dir
/apps/hive/warehouse
location of default database for the warehouse
- - + +
hive.metastore.sasl.enabled
If true, the metastore thrift interface will be secured with SASL.
Clients must authenticate with Kerberos.
- - + +
hive.metastore.kerberos.keytab.file
The path to the Kerberos Keytab file containing the metastore thrift server's service principal.
- - + +
hive.metastore.kerberos.principal
The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.
- - + +
hive.metastore.cache.pinobjtypes
Table,Database,Type,FieldSchema,Order
List of comma separated metastore object types that should be pinned in the cache
- - + +
hive.metastore.uris
URI for client to contact metastore server
- - + +
hadoop.clientside.fs.operations
true
FS operations are owned by client
- - + +
hive.metastore.client.socket.timeout
60
MetaStore Client socket timeout in seconds
- - + +
hive.metastore.execute.setugi
true
In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it's best effort. If the client sets it to true and the server sets it to false, the client setting will be ignored.
- - + +
hive.security.authorization.enabled
true
enable or disable the hive client authorization
- - + +
hive.security.authorization.manager
org.apache.hcatalog.security.HdfsAuthorizationProvider
the hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
- - + +
hive.server2.enable.doAs
true
- - + +
fs.hdfs.impl.disable.cache
true
- - + +

http://git-wip-us.apache.org/repos/asf/ambari/blob/8e7103a8/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/capacity-scheduler.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/capacity-scheduler.xml b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/capacity-scheduler.xml
index 5b270c2..17929cc 100644
--- a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/capacity-scheduler.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/capacity-scheduler.xml
@@ -26,8 +26,8 @@
Maximum number of jobs in the system which can be initialized, concurrently, by the CapacityScheduler.
- - + +
mapred.capacity-scheduler.queue.default.capacity
@@ -35,8 +35,8 @@
Percentage of the number of slots in the cluster that are to be available for jobs in this queue.
- - + +
mapred.capacity-scheduler.queue.default.maximum-capacity
@@ -55,8 +55,8 @@
the max capacity would change. So if a large number of nodes or racks get added to the cluster, max capacity in absolute terms would increase accordingly.
- - + +
mapred.capacity-scheduler.queue.default.supports-priority
@@ -64,8 +64,8 @@
If true, priorities of jobs will be taken into account in scheduling decisions.
- - + +
mapred.capacity-scheduler.queue.default.minimum-user-limit-percent
@@ -81,8 +81,8 @@
or more users, no user can use more than 25% of the queue's resources. A value of 100 implies no user limits are imposed.
- - + +
mapred.capacity-scheduler.queue.default.user-limit-factor
@@ -90,8 +90,8 @@
The multiple of the queue capacity which can be configured to allow a single user to acquire more slots.
- - + +
mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks
@@ -100,8 +100,8 @@
which can be initialized concurrently. Once the queue's jobs exceed this limit they will be queued on disk.
- - + +
mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks-per-user
@@ -110,8 +110,8 @@
user's jobs in the queue, which can be initialized concurrently. Once the user's jobs exceed this limit they will be queued on disk.
- - + +
mapred.capacity-scheduler.queue.default.init-accept-jobs-factor
@@ -119,8 +119,8 @@
The multiple of (maximum-system-jobs * queue-capacity) used to determine the number of jobs which are accepted by the scheduler.
- - + +
@@ -131,8 +131,8 @@
If true, priorities of jobs will be taken into account in scheduling decisions by default in a job queue.
- - + +
mapred.capacity-scheduler.default-minimum-user-limit-percent
@@ -140,8 +140,8 @@
The percentage of the resources limited to a particular user for the job queue at any given point of time by default.
- - + +
mapred.capacity-scheduler.default-user-limit-factor
@@ -149,8 +149,8 @@
The default multiple of queue-capacity which is used to determine the amount of slots a single user can consume concurrently.
- - + +
mapred.capacity-scheduler.default-maximum-active-tasks-per-queue
@@ -159,8 +159,8 @@
queue, which can be initialized concurrently. Once the queue's jobs exceed this limit they will be queued on disk.
- - + +
mapred.capacity-scheduler.default-maximum-active-tasks-per-user
@@ -169,8 +169,8 @@
the user's jobs in the queue, which can be initialized concurrently. Once the user's jobs exceed this limit they will be queued on disk.
- - + +
mapred.capacity-scheduler.default-init-accept-jobs-factor
@@ -178,8 +178,8 @@
The default multiple of (maximum-system-jobs * queue-capacity) used to determine the number of jobs which are accepted by the scheduler.
- - + +
@@ -188,8 +188,8 @@
The amount of time in milliseconds which is used to poll the job queues for jobs to initialize.
- - + +
mapred.capacity-scheduler.init-worker-threads
@@ -202,7 +202,7 @@
is greater, then number of threads would be equal to number of job queues.
- - + +

http://git-wip-us.apache.org/repos/asf/ambari/blob/8e7103a8/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml
index 2b6307e..3f83f98 100644
--- a/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/1.2.0/services/MAPREDUCE/configuration/mapred-queue-acls.xml
@@ -22,14 +22,14 @@
mapred.queue.default.acl-submit-job
*
- - + +
mapred.queue.default.acl-administer-jobs
*
- - + +
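All of the files touched above (core-site.xml, hive-site.xml, capacity-scheduler.xml, mapred-queue-acls.xml) use the standard Hadoop configuration layout: each setting is a property element carrying a name, an optional value, and an optional description. For orientation, a minimal sketch of that layout, using the fs.checkpoint.period name, value, and description quoted in the core-site.xml hunk above (this is illustrative only, not the literal removed/added lines of this commit):

```xml
<!-- Sketch of the Hadoop *-site.xml property format used by the
     stack configuration files in this diff. The enclosing
     <configuration> root and the name/value/description children
     follow the stock Hadoop configuration schema. -->
<configuration>
  <property>
    <name>fs.checkpoint.period</name>
    <value>21600</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>
</configuration>
```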