From: dmitriusan@apache.org
To: commits@ambari.apache.org
Reply-To: ambari-dev@ambari.apache.org
Date: Thu, 09 Jun 2016 14:19:22 -0000
In-Reply-To: <2b0dc64e37f2466b9a5beb0fab583e35@git.apache.org>
References: <2b0dc64e37f2466b9a5beb0fab583e35@git.apache.org>
Subject: [02/94] ambari git commit: AMBARI-17112. Fixed implementation of on-ambari-upgrade support. Update all stack configuration xmls to pass validation (dlysnichenko)

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/global.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/global.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/global.xml
index 746578e..d927606 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/global.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/global.xml
@@ -24,245 +24,210 @@
namenode_host NameNode Host. - - +
dfs_namenode_name_dir /hadoop/hdfs/namenode NameNode Directories. - - +
snamenode_host Secondary NameNode. - - +
dfs_namenode_checkpoint_dir /hadoop/hdfs/namesecondary Secondary NameNode checkpoint dir. - - +
datanode_hosts List of Datanode Hosts. - - +
dfs_datanode_data_dir /hadoop/hdfs/data Data directories for Data Nodes. - - +
hdfs_log_dir_prefix /var/log/hadoop Hadoop Log Dir Prefix - - +
hadoop_pid_dir_prefix /var/run/hadoop Hadoop PID Dir Prefix - - +
dfs_webhdfs_enabled true WebHDFS enabled - - +
hadoop_heapsize 1024 Hadoop maximum Java heap size - - +
namenode_heapsize 1024 NameNode Java heap size - - +
namenode_opt_newsize 200 Default size of Java new generation for NameNode (Java option -XX:NewSize) Note: The value of NameNode new generation size (default size of Java new generation for NameNode (Java option -XX:NewSize)) should be 1/8 of maximum heap size (-Xmx). Ensure that the value of the namenode_opt_newsize property is 1/8 the value of maximum heap size (-Xmx).
- - +
namenode_opt_maxnewsize 200 NameNode maximum new generation size - - +
namenode_opt_permsize 128 NameNode permanent generation size - - +
namenode_opt_maxpermsize 256 NameNode maximum permanent generation size - - +
datanode_du_reserved 1073741824 Reserved space for HDFS - - +
dtnode_heapsize 1024 DataNode maximum Java heap size - - +
dfs_datanode_failed_volume_tolerated 0 DataNode volumes failure toleration - - +
dfs_namenode_checkpoint_period 21600 HDFS Maximum Checkpoint Delay - - +
fs_checkpoint_size 0.5 FS Checkpoint Size. - - +
proxyuser_group users Proxy user group. - - +
dfs_exclude HDFS Exclude hosts. - - +
dfs_replication 3 Default Block Replication. - - +
dfs_block_local_path_access_user hbase Default Block Replication. - - +
dfs_datanode_address 50010 Port for datanode address. - - +
dfs_datanode_http_address 50075 Port for datanode address. - - +
dfs_datanode_data_dir_perm 750 Datanode dir perms. - - +
security_enabled false Hadoop Security - - +
kerberos_domain EXAMPLE.COM Kerberos realm. - - +
kadmin_pw Kerberos realm admin password - - +
keytab_path /etc/security/keytabs Kerberos keytab path. - - +
keytab_path /etc/security/keytabs KeyTab Directory. - - +
namenode_formatted_mark_dir /var/run/hadoop/hdfs/namenode/formatted/ Formatteed Mark Directory. - - +
hdfs_user hdfs User and Groups. - - +
lzo_enabled true LZO compression enabled - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hadoop-policy.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hadoop-policy.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hadoop-policy.xml
index 93cc9ab..a31a481 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hadoop-policy.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hadoop-policy.xml
@@ -26,8 +26,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.client.datanode.protocol.acl
@@ -37,8 +36,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.datanode.protocol.acl
@@ -48,8 +46,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.inter.datanode.protocol.acl
@@ -59,8 +56,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.namenode.protocol.acl
@@ -70,8 +66,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.inter.tracker.protocol.acl
@@ -81,8 +76,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed.
- - +
security.job.client.protocol.acl
@@ -92,8 +86,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.job.task.protocol.acl
@@ -103,8 +96,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.admin.operations.protocol.acl
@@ -113,8 +105,7 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.refresh.usertogroups.mappings.protocol.acl
@@ -124,8 +115,7 @@
group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +
security.refresh.policy.protocol.acl
@@ -135,7 +125,6 @@
The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hdfs-site.xml
index 8fdf3d7..4a7e98f 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hdfs-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HDFS/configuration/hdfs-site.xml
@@ -28,32 +28,28 @@
of directories then the name table is replicated in all of the directories, for redundancy. true - - +
dfs.support.append true to enable dfs append true - - +
dfs.webhdfs.enabled true Whether to enable WebHDFS feature true - - +
dfs.datanode.failed.volumes.tolerated 0 Number of failed disks a DataNode would tolerate before it stops offering service true - - +
dfs.datanode.data.dir
@@ -65,8 +61,7 @@
Directories that do not exist are ignored. true - - +
dfs.hosts.exclude
@@ -75,8 +70,7 @@
not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. - - +
@@ -290,8 +262,7 @@
The octal umask used when creating files and directories. - - +
dfs.permissions.enabled
@@ -303,22 +274,19 @@
Switching from one parameter value to the other does not change the mode, owner or group of files or directories. - - +
dfs.permissions.superusergroup hdfs The name of the group of super-users. - - +
dfs.namenode.handler.count 100 Added to grow Queue size so that more client connections are allowed - - +
dfs.block.access.token.enable
@@ -327,8 +295,7 @@
If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes. - - +
dfs.namenode.kerberos.principal
@@ -336,8 +303,7 @@
Kerberos principal name for the NameNode - - +
dfs.secondary.namenode.kerberos.principal
@@ -345,8 +311,7 @@
Kerberos principal name for the secondary NameNode.
- - +
dfs.namenode.secondary.http-address localhost:50090 Address of secondary namenode web server - - +
dfs.web.authentication.kerberos.principal
@@ -381,8 +343,7 @@
The HTTP Kerberos principal MUST start with 'HTTP/' per Kerberos HTTP SPENGO specification. - - +
dfs.web.authentication.kerberos.keytab
@@ -391,8 +352,7 @@
The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. - - +
dfs.datanode.kerberos.principal
@@ -400,8 +360,7 @@
The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name. - - +
dfs.namenode.keytab.file
@@ -409,8 +368,7 @@
Combined keytab file containing the namenode service and host principals. - - +
dfs.secondary.namenode.keytab.file
@@ -418,8 +376,7 @@
Combined keytab file containing the namenode service and host principals. - - +
dfs.datanode.keytab.file
@@ -427,15 +384,13 @@
The filename of the keytab file for the DataNode. - - +
dfs.namenode.https-address localhost:50470 The https address where namenode binds - - +
dfs.datanode.data.dir.perm
@@ -444,8 +399,7 @@
directories. The datanode will not come up if the permissions are different on existing dfs.datanode.data.dir directories. If the directories don't exist, they will be created with this permission. - - +
dfs.namenode.accesstime.precision
@@ -454,15 +408,13 @@
The default value is 1 hour. Setting a value of 0 disables access times for HDFS. - - +
dfs.cluster.administrators hdfs ACL for who all can view the default servlets in the HDFS - - +
dfs.namenode.avoid.read.stale.datanode
@@ -472,8 +424,7 @@
heartbeat messages have not been received by the namenode for more than a specified time interval. - - +
dfs.namenode.avoid.write.stale.datanode
@@ -483,8 +434,7 @@
heartbeat messages have not been received by the namenode for more than a specified time interval. - - +
dfs.namenode.write.stale.datanode.ratio
@@ -492,30 +442,26 @@
When the ratio of number stale datanodes to total datanodes marked is greater than this ratio, stop avoiding writing to stale nodes so as to prevent causing hotspots. - - +
dfs.namenode.stale.datanode.interval 30000 Datanode is stale after not getting a heartbeat in this interval in ms - - +
dfs.journalnode.http-address 0.0.0.0:8480 The address and port the JournalNode web UI listens on. If the port is 0 then the server will start on a free port. - - +
dfs.journalnode.edits.dir /grid/0/hdfs/journal The path where the JournalNode daemon will store its local state. - - +
@@ -524,21 +470,18 @@
This configuration parameter turns on short-circuit local reads. - - +
dfs.client.read.shortcircuit.skip.checksum Enable/disbale skipping the checksum check - - +
dfs.domain.socket.path /var/lib/hadoop-hdfs/dn_socket - - +
dfs.client.read.shortcircuit.streams.cache.size
@@ -549,15 +492,13 @@
more file descriptors, but potentially provide better performance on workloads involving lots of seeks. - - +
dfs.namenode.name.dir.restore true Set to true to enable NameNode to attempt recovering a previously failed dfs.namenode.name.dir. When enabled, a recovery of any failed directory is attempted during checkpoint.
- - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HIVE/configuration/hive-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HIVE/configuration/hive-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HIVE/configuration/hive-site.xml
index 7c5365b..c706178 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HIVE/configuration/hive-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/HIVE/configuration/hive-site.xml
@@ -21,190 +21,163 @@ limitations under the License.
ambari.hive.db.schema.name hive Database name used as the Hive Metastore - - +
javax.jdo.option.ConnectionURL jdbc JDBC connect string for a JDBC metastore - - +
javax.jdo.option.ConnectionDriverName com.mysql.jdbc.Driver Driver class name for a JDBC metastore - - +
javax.jdo.option.ConnectionUserName hive username to use against metastore database - - +
javax.jdo.option.ConnectionPassword password to use against metastore database - - +
hive.metastore.warehouse.dir /apps/hive/warehouse location of default database for the warehouse - - +
hive.metastore.sasl.enabled If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos. - - +
hive.metastore.kerberos.keytab.file The path to the Kerberos Keytab file containing the metastore thrift server's service principal. - - +
hive.metastore.kerberos.principal The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name. - - +
hive.metastore.cache.pinobjtypes Table,Database,Type,FieldSchema,Order List of comma separated metastore object types that should be pinned in the cache - - +
hive.metastore.uris thrift://localhost:9083 URI for client to contact metastore server - - +
hive.metastore.client.socket.timeout 60 MetaStore Client socket timeout in seconds - - +
hive.metastore.execute.setugi true In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that its best effort. If client sets its to true and server sets it to false, client setting will be ignored. - - +
hive.security.authorization.enabled false enable or disable the hive client authorization - - +
hive.security.authorization.manager org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider the hive client authorization manager class name. The user defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider. - - +
hive.security.metastore.authorization.manager org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider The authorization manager class name to be used in the metastore for authorization. The user-defined authorization class should implement interface org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider. - - +
hive.security.authenticator.manager org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator Hive client authenticator manager class name. The user-defined authenticator class should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
- - +
hive.server2.enable.doAs true - - +
fs.hdfs.impl.disable.cache true - - +
fs.file.impl.disable.cache true - - +
hive.enforce.bucketing true Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced. - - +
hive.enforce.sorting true Whether sorting is enforced. If true, while inserting into the table, sorting is enforced. - - +
hive.map.aggr true Whether to use map-side aggregation in Hive Group By queries. - - +
hive.optimize.bucketmapjoin true - - +
hive.optimize.bucketmapjoin.sortedmerge true - - +
hive.mapred.reduce.tasks.speculative.execution false Whether speculative execution for reducers should be turned on. - - +
hive.auto.convert.join true Whether Hive enable the optimization about converting common join into mapjoin based on the input file size. - - +
hive.auto.convert.sortmerge.join
@@ -212,14 +185,12 @@ limitations under the License.
Will the join be automatically converted to a sort-merge join, if the joined tables pass the criteria for sort-merge join. - - +
hive.auto.convert.sortmerge.join.noconditionaltask true - - +
hive.auto.convert.join.noconditionaltask
@@ -228,8 +199,7 @@ limitations under the License.
size. If this paramater is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than the specified size, the join is directly converted to a mapjoin (there is no conditional task). - - +
hive.auto.convert.join.noconditionaltask.size
@@ -238,8 +208,7 @@ limitations under the License.
is on, and the sum of size for n-1 of the tables/partitions for a n-way join is smaller than this size, the join is directly converted to a mapjoin(there is no conditional task). The default is 10MB. - - +
hive.optimize.reducededuplication.min.reducer
@@ -248,8 +217,7 @@ limitations under the License.
That means if reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can make very slow, single MR. The optimization will be disabled if number of reducers is less than specified value. - - +
hive.optimize.mapjoin.mapreduce
@@ -259,8 +227,7 @@ limitations under the License.
job (for e.g a group by), each map-only job is merged with the following map-reduce job. - - +
hive.mapjoin.bucket.cache.size
@@ -269,20 +236,17 @@ limitations under the License.
Size per reducer.The default is 1G, i.e if the input size is 10G, it will use 10 reducers. - - +
hive.vectorized.execution.enabled false - - +
hive.optimize.reducededuplication true - - +
hive.optimize.index.filter
@@ -290,7 +254,6 @@ limitations under the License.
Whether to enable automatic use of indexes - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/YARN/configuration/yarn-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/YARN/configuration/yarn-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/YARN/configuration/yarn-site.xml
index 6ffe0ed..1125dfd 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/YARN/configuration/yarn-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.7/services/YARN/configuration/yarn-site.xml
@@ -23,16 +23,14 @@
yarn.resourcemanager.resource-tracker.address localhost:8025 true - - +
yarn.resourcemanager.scheduler.address localhost:8030 The address of the scheduler interface. true - - +
yarn.resourcemanager.address
@@ -41,21 +39,18 @@
The address of the applications manager interface in the RM.
- - +
yarn.resourcemanager.admin.address The address of the RM admin interface. - - +
new-yarn-property some-value some description. - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml
index 0e84951..c7b8aab 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HBASE/configuration/hbase-site.xml
@@ -32,8 +32,7 @@
into /tmp. Change this configuration else all data will be lost on machine restart. - - +
hbase.cluster.distributed
@@ -43,8 +42,7 @@
false, startup will run all HBase and ZooKeeper daemons together in the one JVM. - - +
hbase.tmp.dir
@@ -54,30 +52,26 @@
than '/tmp' (The '/tmp' directory is often cleared on machine restart). - - +
hbase.master.info.bindAddress The bind address for the HBase Master web UI - - +
hbase.master.info.port The port for the HBase Master web UI. - - +
hbase.regionserver.info.port The port for the HBase RegionServer web UI. - - +
hbase.regionserver.global.memstore.upperLimit
@@ -85,8 +79,7 @@
Maximum size of all memstores in a region server before new updates are blocked and flushes are forced. Defaults to 40% of heap - - +
hbase.regionserver.handler.count
@@ -95,8 +88,7 @@
Same property is used by the Master for count of master handlers. Default is 10. - - +
hbase.hregion.majorcompaction
@@ -105,8 +97,7 @@
HStoreFiles in a region. Default: 1 day. Set to 0 to disable automated major compactions. - - +
hbase.regionserver.global.memstore.lowerLimit
@@ -117,8 +108,7 @@
the minimum possible flushing to occur when updates are blocked due to memstore limiting. - - +
hbase.hregion.memstore.block.multiplier
@@ -130,8 +120,7 @@
resultant flush files take a long time to compact or split, or worse, we OOME - - +
hbase.hregion.memstore.flush.size
@@ -141,8 +130,7 @@
exceeds this number of bytes. Value is checked by a thread that runs every hbase.server.thread.wakefrequency. - - +
hbase.hregion.memstore.mslab.enabled
@@ -153,8 +141,7 @@
heavy write loads. This can reduce the frequency of stop-the-world GC pauses on large heaps. - - +
hbase.hregion.max.filesize
@@ -164,8 +151,7 @@
grown to exceed this value, the hosting HRegion is split in two. Default: 1G. - - +
hbase.client.scanner.caching
@@ -177,8 +163,7 @@
Do not set this value such that the time between invocations is greater than the scanner timeout; i.e. hbase.regionserver.lease.period - - +
zookeeper.session.timeout
@@ -190,8 +175,7 @@
"The client sends a requested timeout, the server responds with the timeout that it can give the client. " In milliseconds. - - +
hbase.client.keyvalue.maxsize
@@ -203,8 +187,7 @@
to set this to a fraction of the maximum region size. Setting it to zero or less disables the check. - - +
hbase.hstore.compactionThreshold
@@ -215,8 +198,7 @@
is run to rewrite all HStoreFiles files as one. Larger numbers put off compaction but when it runs, it takes longer to complete. - - +
hbase.hstore.flush.retries.number
@@ -224,8 +206,7 @@
The number of times the region flush operation will be retried.
- - +
hbase.hstore.blockingStoreFiles
@@ -236,8 +217,7 @@
blocked for this HRegion until a compaction is completed, or until hbase.hstore.blockingWaitTime has been exceeded. - - +
hfile.block.cache.size
@@ -247,8 +227,7 @@
used by HFile/StoreFile. Default of 0.25 means allocate 25%. Set to 0 to disable but it's not recommended. - - +
@@ -304,22 +279,19 @@
full privileges, regardless of stored ACLs, across the cluster. Only used when HBase security is enabled. - - +
hbase.security.authentication simple - - +
hbase.security.authorization false Enables HBase authorization. Set the value of this property to false to disable HBase authorization. - - +
hbase.coprocessor.region.classes
@@ -330,8 +302,7 @@
it in HBase's classpath and add the fully qualified class name here. A coprocessor can also be loaded on demand by setting HTableDescriptor. - - +
hbase.coprocessor.master.classes
@@ -343,8 +314,7 @@
implementing your own MasterObserver, just put it in HBase's classpath and add the fully qualified class name here. - - +
hbase.zookeeper.property.clientPort
@@ -352,8 +322,7 @@
Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect. - - +
@@ -383,8 +351,7 @@
and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495). - - +
zookeeper.znode.parent
@@ -394,21 +361,18 @@
By default, all of HBase's ZooKeeper file path are configured with a relative path, so they will all go under this directory unless changed. - - +
hbase.defaults.for.version.skip true Disables version verification. - - +
dfs.domain.socket.path /var/lib/hadoop-hdfs/dn_socket Path to domain socket. - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HDFS/configuration/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HDFS/configuration/hdfs-site.xml
index 78008b2..44af332 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HDFS/configuration/hdfs-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.8/services/HDFS/configuration/hdfs-site.xml
@@ -23,8 +23,7 @@
Determines where on the local filesystem the DFS name node should store the name table.
true - - +
dfs.support.append
@@ -32,14 +31,12 @@
to enable dfs append true false - - +
dfs.webhdfs.enabled true to enable webhdfs - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/configuration/pig-properties.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/configuration/pig-properties.xml b/ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/configuration/pig-properties.xml
index bbac28e..a5c9665 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/configuration/pig-properties.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.1.1/services/PIG/configuration/pig-properties.xml
@@ -87,7 +87,6 @@
hcat.bin=/usr/bin/hcat content - - +

http://git-wip-us.apache.org/repos/asf/ambari/blob/a998371a/ambari-server/src/test/resources/stacks/OTHER/1.0/services/HDFS/configuration/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/OTHER/1.0/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/test/resources/stacks/OTHER/1.0/services/HDFS/configuration/hdfs-site.xml
index 78008b2..44af332 100644
--- a/ambari-server/src/test/resources/stacks/OTHER/1.0/services/HDFS/configuration/hdfs-site.xml
+++ b/ambari-server/src/test/resources/stacks/OTHER/1.0/services/HDFS/configuration/hdfs-site.xml
@@ -23,8 +23,7 @@
Determines where on the local filesystem the DFS name node should store the name table. true - - +
dfs.support.append
@@ -32,14 +31,12 @@
to enable dfs append true false - - +
dfs.webhdfs.enabled true to enable webhdfs - - +
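
Note on the hunks above: the XML markup of these test-resource configuration files is not preserved in this rendering, so each property shows up only as its name, value, and description, followed by the bare change markers "- - +", which appear to correspond to two removed lines and one added line per property with their content stripped. The hunk counts are consistent with a net one-line reduction per property; the global.xml hunk "@@ -24,245 +24,210 @@", for example, shrinks by 35 lines and covers 35 properties. As orientation only, here is a minimal sketch of what a single property block presumably looks like after this change, assuming the standard Hadoop-style property layout and the on-ambari-upgrade element named in the commit subject. The add="true" attribute is an assumption, namenode_host is simply the first property of global.xml reused as an example, and the content of the two removed lines cannot be recovered from this message.

  <property>
    <name>namenode_host</name>
    <value/>
    <description>NameNode Host.</description>
    <!-- assumed added line; element name and attribute inferred from the commit subject -->
    <on-ambari-upgrade add="true"/>
  </property>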