From: dmitriusan@apache.org
To: commits@ambari.apache.org
Date: Thu, 02 Jun 2016 15:03:14 -0000
Subject: [44/47] ambari git commit: AMBARI-16272.
Ambari Upgrade shouldn't automatically add stack configs (dlysnichenko) archived-at: Thu, 02 Jun 2016 15:02:39 -0000 http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-site.xml b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-site.xml index 3575bf2..c20d51d 100644 --- a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-site.xml +++ b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/configuration/hbase-site.xml @@ -33,6 +33,8 @@ into /tmp. Change this configuration else all data will be lost on machine restart. + + hbase.cluster.distributed @@ -42,6 +44,8 @@ false, startup will run all HBase and ZooKeeper daemons together in the one JVM. + + hbase.master.port @@ -52,6 +56,8 @@ int false + + hbase.tmp.dir @@ -65,28 +71,38 @@ directory + + hbase.local.dir ${hbase.tmp.dir}/local Directory on the local filesystem to be used as a local storage + + hbase.master.info.bindAddress 0.0.0.0 The bind address for the HBase Master web UI + + hbase.master.info.port 60010 The port for the HBase Master web UI. + + hbase.regionserver.info.port 60030 The port for the HBase RegionServer web UI. + + hbase.regionserver.global.memstore.upperLimit @@ -105,6 +121,8 @@ 0.8 0.01 + + hbase.regionserver.handler.count @@ -120,6 +138,8 @@ 240 1 + + hbase.hregion.majorcompaction @@ -134,8 +154,9 @@ 2592000000 milliseconds + + - hbase.regionserver.global.memstore.lowerLimit 0.38 @@ -148,11 +169,12 @@ float + + hbase.hregion.memstore.block.multiplier 4 - HBase Region Block Multiplier Block updates if a memstore's size spikes this many times above the size that would cause it to be flushed. 
Useful to prevent runaway memstores during a sudden spike in update traffic. @@ -173,6 +195,8 @@ 1 + + hbase.hregion.memstore.flush.size @@ -188,6 +212,8 @@ 1048576 B + + hbase.hregion.memstore.mslab.enabled @@ -201,6 +227,8 @@ boolean + + hbase.hregion.max.filesize @@ -217,6 +245,8 @@ B 1073741824 + + hbase.client.scanner.caching @@ -236,6 +266,8 @@ 100 rows + + zookeeper.session.timeout @@ -259,6 +291,8 @@ milliseconds 10000 + + hbase.client.keyvalue.maxsize @@ -279,6 +313,8 @@ B 262144 + + hbase.hstore.compactionThreshold @@ -305,6 +341,8 @@ + + hbase.hstore.flush.retries.number @@ -312,8 +350,9 @@ The number of times the region flush operation will be retried. + + - hbase.hstore.blockingStoreFiles hstore blocking storefiles @@ -327,6 +366,8 @@ int + + hfile.block.cache.size @@ -339,8 +380,9 @@ 0.8 0.01 + + - hbase.superuser @@ -355,8 +397,9 @@ hbase_user + + - hbase.security.authentication simple @@ -378,8 +421,9 @@ 1 + + - hbase.security.authorization false @@ -399,8 +443,9 @@ 1 + + - hbase.coprocessor.region.classes org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint @@ -423,11 +468,12 @@ hbase.security.authentication + + - hbase.coprocessor.master.classes - + A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented @@ -444,16 +490,18 @@ hbase.security.authorization + + - hbase.zookeeper.property.clientPort 2181 Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect. + + - - hbase.zookeeper.useMulti true Instructs HBase to make use of ZooKeeper's multi-update functionality. 
This allows certain ZooKeeper operations to complete more quickly and prevents some issues - with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).ยท + with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).· IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+ and will not be downgraded. ZooKeeper versions before 3.4 do not support multi-update and will not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495). + + zookeeper.znode.parent @@ -493,6 +544,8 @@ By default, all of HBase's ZooKeeper file path are configured with a relative path, so they will all go under this directory unless changed. + + hbase.client.retries.number @@ -510,6 +563,8 @@ 50 1 + + hbase.rpc.timeout @@ -527,11 +582,15 @@ milliseconds 10000 + + hbase.defaults.for.version.skip true Disables version verification. + + phoenix.query.timeoutMs @@ -545,17 +604,20 @@ milliseconds 10000 + + - dfs.domain.socket.path /var/lib/hadoop-hdfs/dn_socket Path to domain socket. 
+ + - hbase.rpc.protection authentication + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/core-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/core-site.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/core-site.xml index d216605..a2cb615 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/core-site.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/core-site.xml @@ -1,7 +1,6 @@ - - - - - - + ha.failover-controller.active-standby-elector.zk.op.retries 120 ZooKeeper Failover Controller retries setting for your environment + + - - - + io.file.buffer.size 131072 @@ -37,24 +33,26 @@ The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. + + - io.serializations org.apache.hadoop.io.serializer.WritableSerialization A list of comma-delimited serialization classes that can be used for obtaining serializers and deserializers. + + - io.compression.codecs org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec A list of the compression codec classes that can be used for compression/decompression. + + - - - + fs.defaultFS @@ -63,8 +61,9 @@ The name of the default file system. Either the literal string "local" or a host:port for HDFS. true + + - fs.trash.interval 360 @@ -74,8 +73,9 @@ If trash is disabled server side then the client side configuration is checked. If trash is enabled on the server side then the value configured on the server is used and the client configuration value is ignored. 
+ + - ipc.client.idlethreshold @@ -83,22 +83,25 @@ Defines the threshold number of connections after which connections will be inspected for idleness. + + - ipc.client.connection.maxidletime 30000 The maximum time after which a client will bring down the connection to the server. + + - ipc.client.connect.max.retries 50 Defines the maximum number of retries for IPC connections. + + - ipc.server.tcpnodelay true @@ -108,8 +111,9 @@ decrease latency with a cost of more/smaller packets. + + - mapreduce.jobtracker.webinterface.trusted @@ -119,28 +123,32 @@ not be exposed to public. Enable this option if the interfaces are only reachable by those who have the right authorization. + + - - - hadoop.security.authentication - simple - + + hadoop.security.authentication + simple + Set the authentication for the cluster. Valid values are: simple or kerberos. - - - hadoop.security.authorization - false - + + + + + hadoop.security.authorization + false + Enable authorization for different protocols. - - + + + hadoop.security.auth_to_local DEFAULT -The mapping from kerberos principal names to local OS mapreduce.job.user.names. + The mapping from kerberos principal names to local OS mapreduce.job.user.names. So the default rule is just "DEFAULT" which takes all principals in your default domain to their first component. "omalley@APACHE.ORG" and "omalley/admin@APACHE.ORG" to "omalley", if your default domain is APACHE.ORG. The translations rules have 3 sections: @@ -181,6 +189,8 @@ DEFAULT multiLine + + net.topology.script.file.name @@ -188,5 +198,7 @@ DEFAULT Location of topology script used by Hadoop to determine the rack location of nodes. 
+ + http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml index 77f9dcf..0501957 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml @@ -19,7 +19,6 @@ * limitations under the License. */ --> - hdfs_log_dir_prefix @@ -30,6 +29,8 @@ directory false + + hadoop_pid_dir_prefix @@ -41,6 +42,8 @@ false true + + hadoop_root_logger @@ -50,6 +53,8 @@ false + + hadoop_heapsize @@ -61,6 +66,8 @@ MB false + + namenode_heapsize @@ -81,6 +88,8 @@ dfs.datanode.data.dir + + namenode_opt_newsize @@ -101,6 +110,8 @@ 256 false + + namenode_opt_maxnewsize @@ -121,6 +132,8 @@ 256 false + + namenode_opt_permsize @@ -135,6 +148,8 @@ 128 false + + namenode_opt_maxpermsize @@ -149,6 +164,8 @@ 128 false + + dtnode_heapsize @@ -162,6 +179,8 @@ MB 128 + + proxyuser_group @@ -173,6 +192,8 @@ user false + + hdfs_user @@ -184,6 +205,8 @@ user false + + hdfs_tmp_dir @@ -196,29 +219,35 @@ false false + + hdfs_user_nofile_limit 128000 Max open files limit setting for HDFS user. + + - hdfs_user_nproc_limit 65536 Max number of processes limit setting for HDFS user. 
+ + - hdfs_user_keytab HDFS keytab path + + - hdfs_principal_name HDFS principal name + + - content @@ -329,7 +358,7 @@ export HADOOP_IDENT_STRING=$USER # Add database libraries JAVA_JDBC_LIBS="" if [ -d "/usr/share/java" ]; then - for jarFile in `ls /usr/share/java | grep -E "(mysql|ojdbc|postgresql|sqljdbc)" 2>/dev/null` + for jarFile in `ls /usr/share/java | grep -E "(mysql|ojdbc|postgresql|sqljdbc)" 2>/dev/null` do JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile done @@ -354,7 +383,7 @@ export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:/usr/lib/hadoop/lib/native/Linux-a {% if is_datanode_max_locked_memory_set %} # Fix temporary bug, when ulimit from conf files is not picked up, without full relogin. # Makes sense to fix only when runing DN as root -if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then +if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then ulimit -l {{datanode_max_locked_memory}} fi {% endif %} @@ -362,6 +391,7 @@ fi content + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-policy.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-policy.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-policy.xml index 41bde16..c147171 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-policy.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-policy.xml @@ -1,6 +1,5 @@ - - - security.client.protocol.acl @@ -29,8 +26,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. 
+ + - security.client.datanode.protocol.acl * @@ -39,8 +37,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.datanode.protocol.acl * @@ -49,8 +48,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.inter.datanode.protocol.acl * @@ -59,8 +59,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.namenode.protocol.acl * @@ -69,8 +70,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.inter.tracker.protocol.acl * @@ -79,8 +81,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.job.client.protocol.acl * @@ -89,8 +92,9 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.job.task.protocol.acl * @@ -99,17 +103,19 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - - + security.admin.operations.protocol.acl hadoop ACL for AdminOperationsProtocol. Used for admin commands. The ACL is a comma-separated list of user and group names. 
The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - security.refresh.usertogroups.mappings.protocol.acl hadoop @@ -118,9 +124,10 @@ group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - - + security.refresh.policy.protocol.acl hadoop ACL for RefreshAuthorizationPolicyProtocol, used by the @@ -128,7 +135,7 @@ The ACL is a comma-separated list of user and group names. The user and group list is separated by a blank. For e.g. "alice,bob users,wheel". A special value of "*" means all users are allowed. + + - - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-log4j.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-log4j.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-log4j.xml index 8bbb2c9..e154d58 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-log4j.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-log4j.xml @@ -19,9 +19,7 @@ * limitations under the License. 
*/ --> - - content hdfs-log4j template @@ -201,6 +199,7 @@ log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN content false + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml index 260fe65..dbe5d96 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml @@ -1,6 +1,5 @@ - - - - - dfs.namenode.name.dir @@ -38,15 +33,17 @@ directories false + + - dfs.support.append true to enable dfs append true + + - dfs.webhdfs.enabled true @@ -57,8 +54,9 @@ boolean false + + - dfs.datanode.failed.volumes.tolerated 0 @@ -77,8 +75,9 @@ dfs.datanode.data.dir + + - dfs.datanode.data.dir /hadoop/hdfs/data @@ -93,8 +92,9 @@ directories + + - dfs.hosts.exclude /etc/hadoop/conf/dfs.exclude @@ -102,8 +102,9 @@ not permitted to connect to the namenode. The full pathname of the file must be specified. If the value is empty, no hosts are excluded. + + - - dfs.namenode.checkpoint.dir /hadoop/hdfs/namesecondary @@ -128,8 +128,9 @@ directories false + + - dfs.namenode.checkpoint.edits.dir ${dfs.namenode.checkpoint.dir} @@ -139,9 +140,9 @@ replicated in all of the directories for redundancy. Default value is same as dfs.namenode.checkpoint.dir + + - - dfs.namenode.checkpoint.period 21600 @@ -151,8 +152,9 @@ int seconds + + - dfs.namenode.checkpoint.txns 1000000 @@ -160,15 +162,17 @@ of the namespace every 'dfs.namenode.checkpoint.txns' transactions, regardless of whether 'dfs.namenode.checkpoint.period' has expired. 
+ + - dfs.replication.max 50 Maximal block replication. + + - dfs.replication 3 @@ -178,14 +182,16 @@ int + + - dfs.heartbeat.interval 3 Determines datanode heartbeat interval in seconds. + + - dfs.namenode.safemode.threshold-pct 0.999 @@ -202,8 +208,9 @@ 1.000 0.001 + + - dfs.datanode.balance.bandwidthPerSec 6250000 @@ -212,46 +219,52 @@ can utilize for the balancing purpose in term of the number of bytes per second. + + - dfs.https.port 50470 This property is used by HftpFileSystem. + + - dfs.datanode.address 0.0.0.0:50010 The datanode server address and port for data transfer. + + - dfs.datanode.http.address 0.0.0.0:50075 The datanode http server address and port. + + - dfs.datanode.https.address 0.0.0.0:50475 The datanode https server address and port. + + - dfs.blocksize 134217728 The default block size for new files. + + - dfs.namenode.http-address localhost:50070 @@ -259,15 +272,17 @@ The name of the default file system. Either the literal string "local" or a host:port for HDFS. true + + - dfs.namenode.rpc-address localhost:8020 DONT_ADD_ON_UPGRADE RPC address that handles all clients requests. + + - dfs.datanode.du.reserved @@ -285,8 +300,9 @@ dfs.datanode.data.dir + + - dfs.datanode.ipc.address 0.0.0.0:8010 @@ -294,14 +310,16 @@ The datanode ipc server address and port. If the port is 0 then the server will start on a free port. + + - dfs.blockreport.initialDelay 120 Delay for first block report in seconds. + + - dfs.datanode.max.transfer.threads 1024 @@ -312,18 +330,19 @@ 0 48000 + + - - fs.permissions.umask-mode 022 The octal umask used when creating files and directories. + + - dfs.permissions.enabled true @@ -334,14 +353,16 @@ Switching from one parameter value to the other does not change the mode, owner or group of files or directories. + + - dfs.permissions.superusergroup hdfs The name of the group of super-users. 
+ + - dfs.namenode.handler.count 100 @@ -352,8 +373,9 @@ 1 200 + + - dfs.block.access.token.enable true @@ -361,25 +383,26 @@ If "true", access tokens are used as capabilities for accessing datanodes. If "false", no access tokens are checked on accessing datanodes. + + - dfs.namenode.secondary.http-address localhost:50090 DONT_ADD_ON_UPGRADE Address of secondary namenode web server + + - - dfs.namenode.https-address localhost:50470 DONT_ADD_ON_UPGRADE The https address where namenode binds - + + - dfs.datanode.data.dir.perm 750 @@ -391,8 +414,9 @@ int + + - dfs.namenode.accesstime.precision 0 @@ -404,14 +428,16 @@ int + + - dfs.cluster.administrators hdfs ACL for who all can view the default servlets in the HDFS + + - dfs.namenode.avoid.read.stale.datanode true @@ -420,6 +446,8 @@ heartbeat messages have not been received by the namenode for more than a specified time interval. + + dfs.namenode.avoid.write.stale.datanode @@ -429,6 +457,8 @@ heartbeat messages have not been received by the namenode for more than a specified time interval. + + dfs.namenode.write.stale.datanode.ratio @@ -436,35 +466,40 @@ When the ratio of number stale datanodes to total datanodes marked is greater than this ratio, stop avoiding writing to stale nodes so as to prevent causing hotspots. + + dfs.namenode.stale.datanode.interval 30000 Datanode is stale after not getting a heartbeat in this interval in ms + + - dfs.journalnode.http-address 0.0.0.0:8480 The address and port the JournalNode web UI listens on. If the port is 0 then the server will start on a free port. + + - dfs.journalnode.https-address 0.0.0.0:8481 The address and port the JournalNode HTTPS server listens on. If the port is 0 then the server will start on a free port. + + - dfs.journalnode.edits.dir /grid/0/hdfs/journal The path where the JournalNode daemon will store its local state. 
+ + - - dfs.client.read.shortcircuit true @@ -475,8 +510,9 @@ boolean + + - dfs.domain.socket.path /var/lib/hadoop-hdfs/dn_socket @@ -484,8 +520,9 @@ This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode. + + - dfs.client.read.shortcircuit.streams.cache.size 4096 @@ -495,15 +532,17 @@ more file descriptors, but potentially provide better performance on workloads involving lots of seeks. + + - dfs.namenode.name.dir.restore true Set to true to enable NameNode to attempt recovering a previously failed dfs.namenode.name.dir. When enabled, a recovery of any failed directory is attempted during checkpoint. + + - dfs.http.policy HTTP_ONLY @@ -512,6 +551,7 @@ The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : Service is provided only on https - HTTP_AND_HTTPS : Service is provided both on http and https + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-client.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-client.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-client.xml index 809d5c5..f198565 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-client.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-client.xml @@ -1,6 +1,5 @@ - - - ssl.client.truststore.location - /etc/security/clientKeys/all.jks - Location of the trust store file. - - - ssl.client.truststore.type - jks - Optional. Default value is "jks". - - - ssl.client.truststore.password - bigdata - PASSWORD - Password to open the trust store file. 
- - password - - - - ssl.client.truststore.reload.interval - 10000 - Truststore reload interval, in milliseconds. - - - ssl.client.keystore.type - jks - Optional. Default value is "jks". - - - ssl.client.keystore.location - /etc/security/clientKeys/keystore.jks - Location of the keystore file. - - - ssl.client.keystore.password - bigdata - PASSWORD - Password to open the keystore file. - - password - - + + ssl.client.truststore.location + /etc/security/clientKeys/all.jks + Location of the trust store file. + + + + + ssl.client.truststore.type + jks + Optional. Default value is "jks". + + + + + ssl.client.truststore.password + bigdata + PASSWORD + Password to open the trust store file. + + password + + + + + + ssl.client.truststore.reload.interval + 10000 + Truststore reload interval, in milliseconds. + + + + + ssl.client.keystore.type + jks + Optional. Default value is "jks". + + + + + ssl.client.keystore.location + /etc/security/clientKeys/keystore.jks + Location of the keystore file. + + + + + ssl.client.keystore.password + bigdata + PASSWORD + Password to open the keystore file. + + password + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-server.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-server.xml b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-server.xml index 32199c0..176efaa 100644 --- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-server.xml +++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/ssl-server.xml @@ -1,6 +1,5 @@ - - - ssl.server.truststore.location - /etc/security/serverKeys/all.jks - Location of the trust store file. - - - ssl.server.truststore.type - jks - Optional. Default value is "jks". 
- - - ssl.server.truststore.password - bigdata - PASSWORD - Password to open the trust store file. - - password - - - - ssl.server.truststore.reload.interval - 10000 - Truststore reload interval, in milliseconds. - - - ssl.server.keystore.type - jks - Optional. Default value is "jks". - - - ssl.server.keystore.location - /etc/security/serverKeys/keystore.jks - Location of the keystore file. - - - ssl.server.keystore.password - bigdata - PASSWORD - Password to open the keystore file. - - password - - - - ssl.server.keystore.keypassword - bigdata - PASSWORD - Password for private key in keystore file. - - password - - + + ssl.server.truststore.location + /etc/security/serverKeys/all.jks + Location of the trust store file. + + + + + ssl.server.truststore.type + jks + Optional. Default value is "jks". + + + + + ssl.server.truststore.password + bigdata + PASSWORD + Password to open the trust store file. + + password + + + + + + ssl.server.truststore.reload.interval + 10000 + Truststore reload interval, in milliseconds. + + + + + ssl.server.keystore.type + jks + Optional. Default value is "jks". + + + + + ssl.server.keystore.location + /etc/security/serverKeys/keystore.jks + Location of the keystore file. + + + + + ssl.server.keystore.password + bigdata + PASSWORD + Password to open the keystore file. + + password + + + + + + ssl.server.keystore.keypassword + bigdata + PASSWORD + Password for private key in keystore file. 
+ + password + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hcat-env.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hcat-env.xml b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hcat-env.xml index b239561..0fd1c06 100644 --- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hcat-env.xml +++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hcat-env.xml @@ -19,7 +19,6 @@ * limitations under the License. */ --> - @@ -56,6 +55,7 @@ content + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml index 15d07dd..ee83ff0 100644 --- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml +++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml @@ -19,7 +19,6 @@ * limitations under the License. */ --> - hive.client.heapsize @@ -34,8 +33,9 @@ 512 false + + - hive.metastore.heapsize 1024 @@ -48,8 +48,9 @@ MB 512 + + - hive_database_type mysql @@ -64,6 +65,8 @@ hive_database + + hive_database @@ -75,11 +78,15 @@ false + + hive_ambari_database MySQL Database type. 
+ + hive_database_name @@ -91,6 +98,8 @@ true false + + hive_log_dir @@ -101,6 +110,8 @@ directory false + + hive_pid_dir @@ -112,6 +123,8 @@ false true + + hive_user @@ -123,10 +136,10 @@ user false + + - - hcat_log_dir /var/log/webhcat @@ -136,6 +149,8 @@ directory false + + hcat_pid_dir @@ -147,6 +162,8 @@ false true + + hcat_user @@ -158,6 +175,8 @@ user false + + webhcat_user @@ -169,20 +188,23 @@ user false + + - hive_user_nofile_limit 32000 Max open files limit setting for HIVE user. + + - hive_user_nproc_limit 16000 Max number of processes limit setting for HIVE user. + + - content @@ -234,6 +256,7 @@ export HADOOP_CLASSPATH={{atlas_conf_dir}}:{{atlas_home_dir}}/hook/hive:${HADOOP content + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-exec-log4j.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-exec-log4j.xml b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-exec-log4j.xml index b7f4200..7ce7e47 100644 --- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-exec-log4j.xml +++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-exec-log4j.xml @@ -19,9 +19,7 @@ * limitations under the License. 
*/ --> - - content hive-exec-log4j template @@ -111,6 +109,7 @@ log4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO=WARN,FA content false + + - http://git-wip-us.apache.org/repos/asf/ambari/blob/6919aa50/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-log4j.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-log4j.xml b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-log4j.xml index d017530..b6675cf 100644 --- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-log4j.xml +++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-log4j.xml @@ -19,9 +19,7 @@ * limitations under the License. */ --> - - content hive-log4j template @@ -120,6 +118,7 @@ log4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO=WARN,DRFA content false + + -