From: srimanth@apache.org
To: commits@ambari.apache.org
Reply-To: ambari-dev@ambari.apache.org
Date: Thu, 02 Jun 2016 05:49:06 -0000
Message-Id: <5ad478730053496dad7f8246d60243fb@git.apache.org>
Subject: [57/98] [abbrv] ambari git commit: Revert "AMBARI-16272. Ambari Upgrade shouldn't automatically add stack configs (dlysnichenko)" - failing tests.

http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/1.3.1/services/WEBHCAT/configuration/webhcat-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/1.3.1/services/WEBHCAT/configuration/webhcat-site.xml b/ambari-server/src/test/resources/stacks/HDP/1.3.1/services/WEBHCAT/configuration/webhcat-site.xml
index e7539a1..31d0113 100644
--- a/ambari-server/src/test/resources/stacks/HDP/1.3.1/services/WEBHCAT/configuration/webhcat-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/1.3.1/services/WEBHCAT/configuration/webhcat-site.xml
@@ -16,122 +16,111 @@
[The XML markup of this hunk was stripped in archiving; the hunk reflows the <property> blocks for the settings below, listed as name = value -- description. Values left blank in the stack are shown as (empty); a leading ellipsis marks a description whose opening words were lost.]

  templeton.port = 50111 -- The HTTP port for the main server.
  templeton.hadoop.conf.dir = /etc/hadoop/conf -- The path to the Hadoop configuration.
  templeton.jar = /usr/lib/hcatalog/share/webhcat/svr/webhcat.jar -- The path to the Templeton jar file.
  templeton.libjars = /usr/lib/zookeeper/zookeeper.jar -- Jars to add to the classpath.
  templeton.hadoop = /usr/bin/hadoop -- The path to the Hadoop executable.
  templeton.pig.archive = hdfs:///apps/webhcat/pig.tar.gz -- The path to the Pig archive.
  templeton.pig.path = pig.tar.gz/pig/bin/pig -- The path to the Pig executable.
  templeton.hcat = /usr/bin/hcat -- The path to the hcatalog executable.
  templeton.hive.archive = hdfs:///apps/webhcat/hive.tar.gz -- The path to the Hive archive.
  templeton.hive.path = hive.tar.gz/hive/bin/hive -- The path to the Hive executable.
  templeton.hive.properties = (empty) -- Properties to set when running hive.
  templeton.zookeeper.hosts = (empty) -- ZooKeeper servers, as comma-separated host:port pairs.
  templeton.storage.class = org.apache.hcatalog.templeton.tool.ZooKeeperStorage -- The class to use as storage.
  templeton.override.enabled = false -- Enable the override path in templeton.override.jars.
  templeton.streaming.jar = hdfs:///apps/webhcat/hadoop-streaming.jar -- The HDFS path to the Hadoop streaming jar file.
  templeton.exec.timeout = 60000 -- Timeout for the templeton API.
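For orientation, this is how one of the blocks above reads with its markup intact; a minimal sketch reconstructed from the name/value/description text (the exact attributes and diff markers in the committed file may differ):

  <property>
    <name>templeton.port</name>
    <value>50111</value>
    <description>The HTTP port for the main server.</description>
  </property>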
http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/1.3.4/services/HDFS/configuration/hdfs-log4j.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/1.3.4/services/HDFS/configuration/hdfs-log4j.xml b/ambari-server/src/test/resources/stacks/HDP/1.3.4/services/HDFS/configuration/hdfs-log4j.xml
index de7e2af..ef06453 100644
--- a/ambari-server/src/test/resources/stacks/HDP/1.3.4/services/HDFS/configuration/hdfs-log4j.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/1.3.4/services/HDFS/configuration/hdfs-log4j.xml
@@ -19,7 +19,9 @@
@@ -188,10 +190,9 @@
[Markup stripped in archiving; the hunks reflow the <property> block named "content", whose value is the HDFS log4j configuration text (ending with log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender and log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter).]

http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-policy.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-policy.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-policy.xml
index ecfbfe3..e45f23c 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-policy.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-policy.xml
@@ -19,6 +19,7 @@
@@ -28,9 +29,8 @@
@@ -39,9 +39,8 @@
@@ -50,7 +49,5 @@
[Markup stripped in archiving; the hunks reflow the <property> blocks for the ACLs below. Each carries the same description: "The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. 'alice,bob users,wheel'. A special value of '*' means all users are allowed."]

  security.client.protocol.acl = *
  security.admin.protocol.acl = *
  security.masterregion.protocol.acl = *
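To make the shared ACL syntax concrete: users come before the blank, groups after it. A hypothetical restricted entry (the names are illustrative, not from the commit; the stack default above is simply "*"):

  <property>
    <name>security.admin.protocol.acl</name>
    <value>alice,bob users,wheel</value>
    <description>alice and bob, plus members of the users and wheel groups, may call admin operations.</description>
  </property>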
http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-site.xml
index 532e8b9..84d3eea 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HBASE/configuration/hbase-site.xml
[Markup and interleaved hunk headers garbled in archiving; recoverable settings:]

  hbase.rootdir = (empty) -- The directory shared by region servers and into which HBase persists. The URL should be 'fully-qualified' to include the filesystem scheme. If not set, HBase writes into /tmp; change this configuration, else all data will be lost on machine restart.
  hbase.cluster.distributed -- ... when false, startup will run all HBase and ZooKeeper daemons together in the one JVM.
  hbase.tmp.dir = (empty) -- Temporary directory on the local filesystem. Change this setting to point to a location more permanent than '/tmp' (the '/tmp' directory is often cleared on machine restart).
  hbase.master.info.bindAddress = (empty) -- The bind address for the HBase Master web UI.
  hbase.master.info.port = (empty) -- The port for the HBase Master web UI.
  hbase.regionserver.info.port = (empty) -- The port for the HBase RegionServer web UI.
  hbase.regionserver.global.memstore.upperLimit = (empty) -- Maximum size of all memstores in a region server before new updates are blocked and flushes are forced. Defaults to 40% of heap.
  hbase.regionserver.handler.count = (empty) -- Count of RPC Listener instances spun up on RegionServers. The same property is used by the Master for the count of master handlers. Default is 10.
  hbase.hregion.majorcompaction = (empty) -- The time (in milliseconds) between 'major' compactions of all HStoreFiles in a region. Default: 1 day. Set to 0 to disable automated major compactions.
  hbase.regionserver.global.memstore.lowerLimit = (empty) -- When memstores are being forced to flush to make room in memory, keep flushing until we hit this mark. Defaults to 35% of heap. A value equal to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked due to memstore limiting.
  hbase.hregion.memstore.block.multiplier = (empty) -- Block updates if the memstore reaches hbase.hregion.memstore.block.multiplier times hbase.hregion.flush.size bytes. Useful for preventing a runaway memstore during spikes in update traffic; without an upper bound, the resultant flush files take a long time to compact or split, or worse, we OOME.
  hbase.hregion.memstore.flush.size = (empty) -- Memstore will be flushed to disk if the size of the memstore exceeds this number of bytes. The value is checked by a thread that runs every hbase.server.thread.wakefrequency.
  hbase.hregion.memstore.mslab.enabled = (empty) -- Enables the MemStore-Local Allocation Buffer, a feature which works to prevent heap fragmentation under heavy write loads. This can reduce the frequency of stop-the-world GC pauses on large heaps.
  hbase.hregion.max.filesize = (empty) -- Maximum HStoreFile size. If any one of a column family's HStoreFiles has grown to exceed this value, the hosting HRegion is split in two. Default: 1G.
  hbase.client.scanner.caching = (empty) -- Number of rows that will be fetched when calling next on a scanner if it is not served from (local, client) memory. Higher caching values enable faster scanners but eat up more memory. Do not set this value such that the time between invocations is greater than the scanner timeout, i.e. hbase.regionserver.lease.period.
  zookeeper.session.timeout -- ... "The client sends a requested timeout, the server responds with the timeout that it can give the client." In milliseconds.
  hbase.client.keyvalue.maxsize = (empty) -- Specifies the combined maximum allowed size of a KeyValue instance; an upper boundary for a single entry saved in a storage file. Since they cannot be split, it helps avoiding that a region ... to set this to a fraction of the maximum region size. Setting it to zero or less disables the check.
  hbase.hstore.compactionThreshold = (empty) -- If more than this number of HStoreFiles in any one HStore (one HStoreFile is written per flush of memstore), a compaction is run to rewrite all HStoreFiles as one. Larger numbers put off compaction, but when it runs it takes longer to complete.
  hbase.hstore.blockingStoreFiles = (empty) -- If more than this number of StoreFiles in any one Store (one StoreFile is written per flush of MemStore), updates are blocked for this HRegion until a compaction is completed or hbase.hstore.blockingWaitTime has been exceeded.
  hfile.block.cache.size = (empty) -- Percentage of maximum heap (-Xmx setting) to allocate to the block cache used by HFile/StoreFile. The default of 0.25 means allocate 25%. Set to 0 to disable, but that is not recommended.
  hbase.master.keytab.file = (empty) -- Full path to the kerberos keytab file to use for logging in the configured HMaster server principal.
  hbase.master.kerberos.principal = (empty) -- Ex. "hbase/_HOST@EXAMPLE.COM". The kerberos principal name that should be used to run the HMaster process, in the form user/hostname@DOMAIN. If "_HOST" is used as the hostname portion, it will be replaced with the actual hostname of the running instance.
  hbase.regionserver.keytab.file = (empty) -- Full path to the kerberos keytab file to use for logging in the configured HRegionServer server principal.
  hbase.regionserver.kerberos.principal = (empty) -- Ex. "hbase/_HOST@EXAMPLE.COM". Same form as above; an entry for this principal must exist in the file specified in hbase.regionserver.keytab.file.
  hbase.superuser -- ... full privileges, regardless of stored ACLs, across the cluster. Only used when HBase security is enabled.
  hbase.coprocessor.region.classes = (empty) -- A comma-separated list of Coprocessors that are loaded by default on all tables. For any override coprocessor method, these classes will be called in order. After implementing your own Coprocessor, just put it in HBase's classpath and add the fully qualified class name here. A coprocessor can also be loaded on demand by setting HTableDescriptor.
  hbase.coprocessor.master.classes = (empty) -- A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented ... just put it in HBase's classpath and add the fully qualified class name here.
  hbase.zookeeper.property.clientPort = 2181 -- Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.
  hbase.zookeeper.quorum = (empty) -- Comma-separated list of servers in the ZooKeeper quorum, e.g. "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com". By default this is set to localhost for local and pseudo-distributed modes ... list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh, this is the list of servers on which we will start/stop ZooKeeper.
  dfs.support.append = (empty) -- Does HDFS allow appends to files? This is an hdfs config, set here so the hdfs client will do append support. You must ensure that this config is true server-side too when running HBase (you will have to restart your cluster after setting it).
  dfs.client.read.shortcircuit = (empty) -- Enable/disable short-circuit read for your client. Hadoop servers should be configured to allow short-circuit read for the hbase user for this to take effect.
  dfs.client.read.shortcircuit.skip.checksum = (empty) -- Enable/disable skipping the checksum check.
  hbase.zookeeper.useMulti = true -- Instructs HBase to make use of ZooKeeper's multi-update functionality. This allows certain ZooKeeper operations to complete more quickly and prevents some issues with rare replication failure scenarios (see the release note of HBASE-2611 for an example). IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+ and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
  zookeeper.znode.parent = (empty) -- Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper files that are configured with a relative path will go under this node. By default, all of HBase's ZooKeeper file paths are configured with a relative path, so they will all go under this directory unless changed.
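A sketch of the _HOST convention used by the Kerberos principal settings above (hostname and realm are illustrative):

  <property>
    <name>hbase.master.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
    <!-- on host master1.example.com this resolves to hbase/master1.example.com@EXAMPLE.COM -->
  </property>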
http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/core-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/core-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/core-site.xml
index f20de8e..b5b2a67 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/core-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/core-site.xml
[Markup and hunk headers garbled in archiving; recoverable settings. "(final)" marks a property the file declares final.]

  io.file.buffer.size = 131072 -- ... The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations.
  io.serializations = org.apache.hadoop.io.serializer.WritableSerialization
  io.compression.codecs = org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec -- A list of the compression codec classes that can be used for compression/decompression.
  io.compression.codec.lzo.class = com.hadoop.compression.lzo.LzoCodec -- The implementation for the lzo codec.
  fs.default.name = (empty) -- The name of the default file system. Either the literal string "local" or a host:port for HDFS. (final)
  fs.trash.interval = 360 -- Number of minutes between trash checkpoints. If zero, the trash feature is disabled.
  fs.checkpoint.dir = (empty) -- Determines where on the local filesystem the DFS secondary name node should store the temporary images to merge. If this is a comma-delimited list of directories, then the image is replicated in all of the directories for redundancy.
  fs.checkpoint.edits.dir = ${fs.checkpoint.dir} -- ... replicated in all of the directories for redundancy. The default value is the same as fs.checkpoint.dir.
  fs.checkpoint.period = 21600 -- The number of seconds between two periodic checkpoints.
  fs.checkpoint.size = 536870912 -- The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.
  ipc.client.idlethreshold -- Defines the threshold number of connections after which connections will be inspected for idleness.
  ipc.client.connection.maxidletime = 30000 -- The maximum time after which a client will bring down the connection to the server.
  ipc.client.connect.max.retries = 50 -- Defines the maximum number of retries for IPC connections.
  webinterface.private.actions -- ... should not be exposed to public. Enable this option only if the interfaces are reachable solely by those who have the right authorization.
  hadoop.security.authentication = simple -- Set the authentication for the cluster. Valid values are: simple or kerberos.
  hadoop.security.authorization = false -- Enable authorization for different protocols.
  hadoop.security.auth_to_local = RULE:[2:$1@$0](rs@.*)s/.*/hbase/ ... DEFAULT -- The mapping from Kerberos principal names to local OS user names. The default rule is just "DEFAULT", which takes all principals in your default domain to their first component: "omalley@APACHE.ORG" and "omalley/admin@APACHE.ORG" both map to "omalley" if your default domain is APACHE.ORG. The translation rules have 3 sections ... If you want to treat all principals from APACHE.ORG with /admin as "admin", your rules would end: RULE[2:$1%$2@$0](.%admin@APACHE.ORG)s/./admin/ DEFAULT.
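Reading one of the rules above: RULE:[2:$1@$0](rs@.*)s/.*/hbase/ selects two-component principals, builds the string "first-component@realm" ($1@$0), and if that string matches rs@.* rewrites the whole name to the local user hbase. A minimal sketch (realm and hostname are illustrative):

  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>
      RULE:[2:$1@$0](rs@.*)s/.*/hbase/
      DEFAULT
    </value>
    <!-- rs/node1.example.com@EXAMPLE.COM -> hbase; omalley@EXAMPLE.COM falls through to DEFAULT -> omalley -->
  </property>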
[The diff stream continues with env-style settings; any intervening file header was lost in archiving.]

  namenode_host = (empty) -- NameNode Host.
  dfs_name_dir = /hadoop/hdfs/namenode -- NameNode Directories.
  snamenode_host = (empty) -- Secondary NameNode.
  fs_checkpoint_dir = /hadoop/hdfs/namesecondary -- Secondary NameNode checkpoint dir.
  datanode_hosts = (empty) -- List of Datanode Hosts.
  dfs_data_dir = /hadoop/hdfs/data -- Data directories for Data Nodes.
  hdfs_log_dir_prefix = /var/log/hadoop -- Hadoop Log Dir Prefix.
  hadoop_pid_dir_prefix = /var/run/hadoop -- Hadoop PID Dir Prefix.
  dfs_webhdfs_enabled = true -- WebHDFS enabled.
  hadoop_heapsize = 1024 -- Hadoop maximum Java heap size.
  namenode_heapsize = 1024 -- NameNode Java heap size.
  namenode_opt_newsize = 200 -- Default size of the Java new generation for the NameNode (Java option -XX:NewSize). Should be 1/8 of the maximum heap size (-Xmx).
  namenode_opt_maxnewsize = 640 -- NameNode maximum new generation size.
  namenode_opt_permsize = 128 -- NameNode permanent generation size.
  namenode_opt_maxpermsize = 256 -- NameNode maximum permanent generation size.
  datanode_du_reserved = 1 -- Reserved space for HDFS.
  dtnode_heapsize = 1024 -- DataNode maximum Java heap size.
  dfs_datanode_failed_volume_tolerated = 0 -- DataNode volumes failure toleration.
  fs_checkpoint_period = 21600 -- HDFS Maximum Checkpoint Delay.
  fs_checkpoint_size = 0.5 -- FS Checkpoint Size.
  proxyuser_group = users -- Proxy user group.
  dfs_exclude = (empty) -- HDFS Exclude hosts.
  dfs_include = (empty) -- HDFS Include hosts.
  dfs_replication = 3 -- Default Block Replication.
  dfs_block_local_path_access_user = hbase -- Default Block Replication.
  dfs_datanode_address = 50010 -- Port for datanode address.
  dfs_datanode_http_address = 50075 -- Port for datanode address.
  dfs_datanode_data_dir_perm = 750 -- Datanode dir perms.
  security_enabled = false -- Hadoop Security.
  kerberos_domain = EXAMPLE.COM -- Kerberos realm.
  kadmin_pw = (empty) -- Kerberos realm admin password.
  keytab_path = /etc/security/keytabs -- Kerberos keytab path.
  keytab_path = /etc/security/keytabs -- KeyTab Directory. (the file defines this property twice)
  namenode_formatted_mark_dir = /var/run/hadoop/hdfs/namenode/formatted/ -- Formatted Mark Directory.
  hdfs_user = hdfs -- User and Groups.

http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hadoop-policy.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hadoop-policy.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hadoop-policy.xml
index 3ba9087..6ec304d 100644
[Markup and hunk headers garbled in archiving; recoverable settings. Unless noted, each ACL below carries the same description quoted for hbase-policy.xml above.]

  security.client.protocol.acl = *
  security.client.datanode.protocol.acl = *
  security.datanode.protocol.acl = *
  security.inter.datanode.protocol.acl = *
  security.namenode.protocol.acl = *
  security.inter.tracker.protocol.acl = *
  security.job.submission.protocol.acl = *
  security.task.umbilical.protocol.acl = *
  security.admin.operations.protocol.acl = hadoop -- ACL for AdminOperationsProtocol. Used for admin commands.
  security.refresh.usertogroups.mappings.protocol.acl = hadoop
  security.refresh.policy.protocol.acl = hadoop -- ACL for RefreshAuthorizationPolicyProtocol, used by the ...

http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hdfs-site.xml
index e1f1461..e277def 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hdfs-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HDFS/configuration/hdfs-site.xml
[Markup and hunk headers garbled in archiving; recoverable settings:]

  dfs.name.dir = (empty) -- Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy. (final)
  dfs.support.append = true -- to enable dfs append (final)
  dfs.webhdfs.enabled = true -- to enable webhdfs (final)
  dfs.datanode.failed.volumes.tolerated = 0 -- number of failed disks a DataNode would tolerate (final)
  dfs.block.local-path-access.user = hbase -- ... circuit reads. (final)
  dfs.data.dir = (empty) -- Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories. Directories that do not exist are ignored. (final)
  dfs.hosts.exclude = (empty) -- Names a file that contains a list of hosts that are not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded.
  dfs.checksum.type = CRC32 -- ... for backward compatibility, it is being set to CRC32. Once all migration steps are complete, we can change it to CRC32C and take advantage of the additional performance benefit.
  dfs.replication.max = 50 -- Maximal block replication.
  dfs.replication = 3 -- Default block replication.
  dfs.heartbeat.interval = 3 -- Determines datanode heartbeat interval in seconds. (the file defines this property twice)
  dfs.safemode.threshold.pct = 1.0f -- ... Values less than or equal to 0 mean not to start in safe mode. Values greater than 1 will make safe mode permanent.
  dfs.balance.bandwidthPerSec = 6250000 -- ... can utilize for the balancing purpose, in terms of the number of bytes per second.
  dfs.datanode.address = 0.0.0.0:50010
  dfs.datanode.http.address = 0.0.0.0:50075
  dfs.block.size = 134217728 -- The default block size for new files.
  dfs.http.address = (empty) -- The name of the default file system. Either the literal string "local" or a host:port for HDFS. (final)
  dfs.datanode.du.reserved = 1073741824 -- Reserved space in bytes per volume. Always leave this much space free for non dfs use.
  dfs.datanode.ipc.address = 0.0.0.0:8010 -- The datanode ipc server address and port. If the port is 0, the server will start on a free port.
  dfs.blockreport.initialDelay = 120 -- Delay for first block report in seconds.
  dfs.namenode.handler.count = 40 -- The number of server threads for the namenode.
  dfs.datanode.max.xcievers = 1024 -- PRIVATE CONFIG VARIABLE
  dfs.umaskmode = 022 -- The octal umask used when creating files and directories.
  dfs.web.ugi = gopher,gopher -- The user account used by the web interface. Syntax: USERNAME,GROUP1,GROUP2, ...
  dfs.permissions = true -- If "true", enable permission checking in HDFS. If "false", permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.
  dfs.permissions.supergroup = hdfs -- The name of the group of super-users.
  dfs.namenode.handler.count = 100 -- Added to grow the queue size so that more client connections are allowed. (the file defines this property twice)
  ipc.server.max.response.size = 5242880
  dfs.block.access.token.enable = true -- If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes.
  dfs.namenode.kerberos.principal = (empty) -- Kerberos principal name for the NameNode.
  dfs.secondary.namenode.kerberos.principal = (empty) -- Kerberos principal name for the secondary NameNode.
  dfs.namenode.kerberos.https.principal -- The Kerberos principal for the host that the NameNode runs on.
  dfs.secondary.namenode.kerberos.https.principal = (empty) -- The Kerberos principal for the host that the secondary NameNode runs on.
  dfs.secondary.http.address = (empty) -- Address of the secondary namenode web server.
  dfs.secondary.https.port = 50490 -- The https port where the secondary namenode binds.
  dfs.web.authentication.kerberos.principal = (empty) -- The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint. The HTTP Kerberos principal MUST start with 'HTTP/' per the Kerberos HTTP SPNEGO specification.
  dfs.web.authentication.kerberos.keytab = (empty) -- The Kerberos keytab file with the credentials for the HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
  dfs.datanode.kerberos.principal = (empty) -- The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name.
  dfs.namenode.keytab.file = (empty) -- Combined keytab file containing the namenode service and host principals.
  dfs.secondary.namenode.keytab.file = (empty) -- Combined keytab file containing the namenode service and host principals.
  dfs.datanode.keytab.file = (empty) -- The filename of the keytab file for the DataNode.
  dfs.https.port = 50470 -- The https port where the namenode binds.
  dfs.https.address = (empty) -- The https address where the namenode binds.
  dfs.datanode.data.dir.perm = 750 -- The permissions that should be set on dfs.data.dir directories. The datanode will not come up if the permissions are different on existing dfs.data.dir directories. If the directories don't exist, they will be created with this permission.
  dfs.access.time.precision = 0 -- ... The default value is 1 hour. Setting a value of 0 disables access times for HDFS.
  dfs.cluster.administrators = hdfs -- ACL for who all can view the default servlets in the HDFS.
  ipc.server.read.threadpool.size = 5
  dfs.namenode.check.stale.datanode = true -- ... DataNodes that have not sent a heartbeat for more than 30s (i.e. are in a stale state) are used for reads only if all other remote replicas have failed.
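One worked value from the list above: dfs.datanode.du.reserved is plain bytes, so the stack default reserves one gibibyte per volume. A sketch with the markup reconstructed:

  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- 1073741824 bytes = 1024^3 = 1 GiB kept free per volume for non-DFS use -->
    <value>1073741824</value>
  </property>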
http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HIVE/configuration/hive-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HIVE/configuration/hive-site.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HIVE/configuration/hive-site.xml
index 8d8d6cd..786c9ce 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HIVE/configuration/hive-site.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/HIVE/configuration/hive-site.xml
@@ -16,138 +16,121 @@
[Markup garbled in archiving; recoverable settings:]

  hive.metastore.local = false -- controls whether to connect to a remote metastore server or open a new metastore server in the Hive Client JVM.
  javax.jdo.option.ConnectionURL = (empty) -- JDBC connect string for a JDBC metastore.
  javax.jdo.option.ConnectionDriverName = com.mysql.jdbc.Driver -- Driver class name for a JDBC metastore.
  javax.jdo.option.ConnectionUserName = (empty) -- username to use against the metastore database.
  javax.jdo.option.ConnectionPassword = (empty, type PASSWORD) -- password to use against the metastore database.
  hive.metastore.warehouse.dir = /apps/hive/warehouse -- location of the default database for the warehouse.
  hive.metastore.sasl.enabled = (empty) -- If true, the metastore thrift interface will be secured with SASL. Clients must authenticate with Kerberos.
  hive.metastore.kerberos.keytab.file = (empty) -- The path to the Kerberos keytab file containing the metastore thrift server's service principal.
  hive.metastore.kerberos.principal = (empty) -- The service principal for the metastore thrift server. The special string _HOST will be replaced automatically with the correct host name.
  hive.metastore.cache.pinobjtypes = Table,Database,Type,FieldSchema,Order -- List of comma-separated metastore object types that should be pinned in the cache.
  hive.metastore.uris = (empty) -- URI for client to contact metastore server.
  hadoop.clientside.fs.operations = true -- FS operations are owned by client.
  hive.metastore.client.socket.timeout = 60 -- MetaStore Client socket timeout in seconds.
  hive.metastore.execute.setugi = true -- In unsecure mode, setting this property to true causes the metastore to execute DFS operations using the client's reported user and group permissions. Note that this property must be set on both the client and server sides. Further note that it is best effort: if the client sets it to true and the server sets it to false, the client setting is ignored.
  hive.security.authorization.enabled = true -- enable or disable the hive client authorization.
  hive.security.authorization.manager = org.apache.hcatalog.security.HdfsAuthorizationProvider -- the hive client authorization manager class name. A user-defined authorization class should implement the interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
  hive.server2.enable.doAs = true
  fs.hdfs.impl.disable.cache = true
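To illustrate how the three javax.jdo connection settings fit together for a MySQL-backed metastore; the URL and username here are placeholders, not values from the commit (the stack leaves them empty):

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://db.example.com/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>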
http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/global.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/global.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/global.xml
index a148c52..ceedd56 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/global.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/global.xml
@@ -19,33 +19,26 @@
[Markup garbled in archiving; recoverable settings:]

  hs_host = (empty) -- History Server.
  mapred_log_dir_prefix = /var/log/hadoop-mapreduce -- Mapreduce Log Dir Prefix.
  mapred_pid_dir_prefix = /var/run/hadoop-mapreduce -- Mapreduce PID Dir Prefix.
  mapred_user = mapred -- Mapreduce User.

http://git-wip-us.apache.org/repos/asf/ambari/blob/3148759f/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/mapred-queue-acls.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/mapred-queue-acls.xml b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/mapred-queue-acls.xml
index 2b6307e..ce12380 100644
--- a/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/mapred-queue-acls.xml
+++ b/ambari-server/src/test/resources/stacks/HDP/2.0.1/services/MAPREDUCE2/configuration/mapred-queue-acls.xml
[Markup garbled in archiving; recoverable settings:]

  mapred.queue.default.acl-submit-job = *
  mapred.queue.default.acl-administer-jobs = *
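The queue ACLs use the same user/group list syntax as the protocol ACLs earlier; "*" opens the default queue to everyone. A hypothetical locked-down variant (the group name is illustrative, not from the commit):

  <property>
    <name>mapred.queue.default.acl-administer-jobs</name>
    <!-- empty user part before the blank, single group "ops": only members of ops may administer jobs -->
    <value> ops</value>
  </property>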