From: dmitriusan@apache.org
To: commits@ambari.apache.org
Reply-To: ambari-dev@ambari.apache.org
Date: Wed, 01 Jun 2016 17:23:11 -0000
Message-Id: <0b520fa6bef042108fb40aad98716fdd@git.apache.org>
In-Reply-To: <6a68b452e7d543b78e519317154950c0@git.apache.org>
References: <6a68b452e7d543b78e519317154950c0@git.apache.org>
Subject: [27/51] [partial] ambari git commit: AMBARI-16272. Ambari Upgrade shouldn't automatically add stack configs. Fix default upgrade policy (dlysnichenko)

[Archive note: the XML markup inside the diff hunks of this message was stripped during archiving. For each file below, the blob URL and diff header survive, followed by the text content of the properties each hunk touches (name, value where present, description). The recurring "- - + +" runs are the residue of the tag-only lines the patch removes and adds around each property.]

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml
index c432cc1..f21bea7 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml

Properties touched by the hunks in this file:

  yarn.application.classpath = $HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*
      Classpath for typical applications.

  hadoop.registry.rm.enabled
      Is the registry enabled: does the RM start it up, create the user and
      system paths, and purge service records when containers, application
      attempts and applications complete?
      (this hunk also adds a value-attributes block of type boolean)

  hadoop.registry.zk.quorum
      List of hostname:port pairs defining the ZooKeeper quorum binding for
      the registry.

  yarn.nodemanager.recovery.enabled = true
      Enable the node manager to recover after starting.

  yarn.nodemanager.recovery.dir
      The local filesystem directory in which the node manager will store
      state when recovery is enabled.
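Judging from the subject line, the per-property churn in these hunks swaps each property's trailing tags for an explicit upgrade-policy marker. Below is a minimal sketch of the shape a post-patch property definition plausibly takes; the marker element and its attributes are an assumption, since the archiver stripped the real tags:

  <property>
    <name>yarn.nodemanager.recovery.enabled</name>
    <value>true</value>
    <description>Enable the node manager to recover after starting.</description>
    <!-- assumed upgrade-policy marker introduced by AMBARI-16272;
         the exact element name and attributes are not recoverable from this archive -->
    <on-ambari-upgrade add="true"/>
  </property>

The remaining hunks in this file follow the same pattern around the properties listed below.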
  yarn.client.nodemanager-connect.retry-interval-ms = 10000
      Time interval between each attempt to connect to the NM.

  yarn.client.nodemanager-connect.max-wait-ms = 60000
      Max time to wait to establish a connection to the NM.

  yarn.resourcemanager.recovery.enabled
      Enable RM to recover state after starting. If true, then
      yarn.resourcemanager.store.class must be specified.

  yarn.resourcemanager.work-preserving-recovery.enabled
      (only the value-attribute text "boolean" survives this hunk)

  yarn.resourcemanager.store.class
      (description truncated at the hunk boundary) ... the store is
      implicitly fenced, meaning a single ResourceManager is able to use the
      store at any point in time.

  yarn.resourcemanager.zk-address
      List of Host:Port of the ZooKeeper servers to be used by the RM:
      comma-separated host:port pairs, each corresponding to a zk server,
      e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002". If the optional
      chroot suffix is used, the example would look like
      "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002/app/a", where the client
      would be rooted at "/app/a" and all paths would be relative to this
      root, i.e. getting/setting/etc. "/foo/bar" would result in operations
      being run on "/app/a/foo/bar" (from the server perspective).

  yarn.resourcemanager.zk-state-store.parent-path = /rmstore
      Full path of the ZooKeeper znode where RM state will be stored. This
      must be supplied when using
      org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
      as the value for yarn.resourcemanager.store.class.

  yarn.resourcemanager.zk-acl = world:anyone:rwcda
      ACLs to be used for ZooKeeper znodes.

  yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms = 10000
      Set the amount of time RM waits before allocating new containers on
      work-preserving recovery. Such a wait period gives RM a chance to
      settle down resyncing with NMs in the cluster on recovery, before
      assigning new containers to applications.

  yarn.resourcemanager.connect.retry-interval.ms = 30000
      How often to try connecting to the ResourceManager.

  yarn.resourcemanager.connect.max-wait.ms = 900000
      Maximum time to wait to establish a connection to the ResourceManager.

  yarn.resourcemanager.zk-retry-interval-ms
      (description truncated) ... automatically from
      yarn.resourcemanager.zk-timeout-ms and
      yarn.resourcemanager.zk-num-retries.

  yarn.resourcemanager.zk-num-retries = 1000
      Number of times RM tries to connect to ZooKeeper.

  yarn.resourcemanager.zk-timeout-ms = 10000
      ZooKeeper session timeout in milliseconds. Session expiration is
      managed by the ZooKeeper cluster itself, not by the client. This value
      is used by the cluster to determine when the client's session expires.
      Expiration happens when the cluster does not hear from the client
      within the specified session timeout period (i.e. no heartbeat).

  yarn.resourcemanager.state-store.max-completed-applications = ${yarn.resourcemanager.max-completed-applications}
      The maximum number of completed applications the RM state store keeps;
      less than or equal to ${yarn.resourcemanager.max-completed-applications}.
      By default it equals ${yarn.resourcemanager.max-completed-applications},
      which ensures that the applications kept in the state store are
      consistent with the applications remembered in RM memory. Any value
      larger than ${yarn.resourcemanager.max-completed-applications} will be
      reset to it. Note that this value impacts the RM recovery performance;
      typically, a smaller value indicates better performance on RM recovery.
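Read together, the recovery properties just listed wire the RM up to a ZooKeeper-backed state store. A hand-assembled yarn-site.xml fragment using only names and values from the listing (the quorum hosts are placeholders):

  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- the ZK-backed store; requires yarn.resourcemanager.zk-address -->
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <!-- placeholder quorum: comma-separated host:port pairs -->
    <name>yarn.resourcemanager.zk-address</name>
    <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
  </property>
  <property>
    <!-- znode under which RM state is kept -->
    <name>yarn.resourcemanager.zk-state-store.parent-path</name>
    <value>/rmstore</value>
  </property>

The file's listing continues below.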
  yarn.resourcemanager.fs.state-store.retry-policy-spec = 2000, 500
      HDFS client retry policy specification. HDFS client retry is always
      enabled. Specified in pairs of sleep-time and number-of-retries, i.e.
      (t0, n0), (t1, n1), ...: the first n0 retries sleep t0 milliseconds on
      average, the following n1 retries sleep t1 milliseconds on average,
      and so on.

  yarn.resourcemanager.fs.state-store.uri
      URI pointing to the location of the FileSystem path where RM state
      will be stored. This must be supplied when using
      org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
      as the value for yarn.resourcemanager.store.class.

  yarn.resourcemanager.ha.enabled = false
      Enable RM HA or not.

  yarn.nodemanager.linux-container-executor.resources-handler.class = org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler
      Prerequisite to use CGroups.

  yarn.nodemanager.linux-container-executor.cgroups.hierarchy = hadoop-yarn
      Name of the CGroups hierarchy under which all YARN jobs will be
      launched.

  yarn.nodemanager.linux-container-executor.cgroups.mount = false
      If true, YARN will automount the CGroup; however, the directory needs
      to already exist. Otherwise, the cgroup should be mounted by the admin.

  yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage = false
      Strictly limit CPU resource usage to allocated usage even if spare CPU
      is available.

  yarn.nodemanager.resource.cpu-vcores
      (surviving text: depends on
      yarn.nodemanager.resource.percentage-physical-cpu-limit)

  yarn.nodemanager.resource.percentage-physical-cpu-limit
      (surviving value-attribute text: 100, 1)

  yarn.node-labels.manager-class = org.apache.hadoop.yarn.server.resourcemanager.nodelabels.MemoryRMNodeLabelsManager
      If you want to enable this feature, set it to
      "org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager".

  yarn.node-labels.fs-store.retry-policy-spec = 2000, 500

  yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb = 1000
      This is related to disk size on the machines; admins should set one of
      yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb or
      yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage,
      but not both. If both are set, the more conservative value will be used.

  yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage = 90
      (same note as above: set one of the two disk health thresholds, not both)

  yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds = -1
      Defines how often NMs wake up to upload log files. The default value
      is -1: by default, logs are uploaded when the application finishes. By
      setting this configuration, logs can be uploaded periodically while
      the application is running. The minimum rolling-interval-seconds that
      can be set is 3600.

  yarn.nodemanager.log-aggregation.debug-enabled
      This configuration is for debug and test purposes. By setting it to
      true, we can break the lower bound of
      yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds.

  yarn.nodemanager.log-aggregation.num-log-files-per-app = 30
      This is a temporary solution. The configuration will be deleted once
      we find a more scalable method to only write a single log file per LRS.
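The four linux-container-executor properties above are the knobs for cgroup-based resource enforcement. A sketch of the non-default combination; the CgroupsLCEResourcesHandler class name is an assumption, since the listing itself only shows the Default handler:

  <property>
    <!-- assumed: the cgroups counterpart of DefaultLCEResourcesHandler -->
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>
  <property>
    <!-- all YARN jobs launch under this cgroups hierarchy -->
    <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
    <value>hadoop-yarn</value>
  </property>
  <property>
    <!-- false: the admin pre-mounts the cgroup; true: YARN automounts an existing directory -->
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
    <value>false</value>
  </property>
  <property>
    <!-- cap containers at their allocation even when spare CPU exists -->
    <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
    <value>false</value>
  </property>

The file's listing continues below.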
  yarn.resourcemanager.system-metrics-publisher.enabled = true

  yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size = 10

  yarn.timeline-service.client.max-retries = 30

  yarn.timeline-service.client.retry-interval-ms = 1000

  yarn.timeline-service.ttl-enable
      (only the value-attribute text "boolean" survives this hunk)

  yarn.timeline-service.recovery.enabled
      Enable the timeline server to recover state after starting. If true,
      then yarn.timeline-service.state-store-class must be specified.

  yarn.timeline-service.state-store-class = org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore
      Store class name for the timeline state store.

  yarn.timeline-service.leveldb-state-store.path (type: directory)

  yarn.timeline-service.leveldb-timeline-store.path (type: directory)

  yarn.timeline-service.leveldb-timeline-store.read-cache-size
      Size of read cache for uncompressed blocks for the leveldb timeline
      store, in bytes.

  yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size
      Size of cache for recently read entity start times for the leveldb
      timeline store, in number of entities.

  yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size
      Size of cache for recently written entity start times for the leveldb
      timeline store, in number of entities.

  yarn.timeline-service.http-authentication.type
      Defines authentication used for the Timeline Server HTTP endpoint.
      Supported values are: simple | kerberos |
      $AUTHENTICATION_HANDLER_CLASSNAME

  yarn.timeline-service.http-authentication.simple.anonymous.allowed = true

  yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled
      (description truncated) ... tokens (falling back to kerberos if the
      tokens are missing). Only applicable when the HTTP authentication type
      is kerberos.

  yarn.resourcemanager.bind-host = 0.0.0.0
      Default value is 0.0.0.0; when this is set, the service will bind on
      all interfaces. I think these two options (blank, "0.0.0.0" sans
      quotes) should be the two available values, with blank as the default.

  yarn.nodemanager.bind-host = 0.0.0.0
      (same note as yarn.resourcemanager.bind-host)

  yarn.timeline-service.bind-host = 0.0.0.0
      (same note as yarn.resourcemanager.bind-host)
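Mirroring the RM recovery pair earlier, the timeline server's recovery is switched on by naming its state-store class. A minimal sketch with a placeholder leveldb directory:

  <property>
    <name>yarn.timeline-service.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.timeline-service.state-store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore</value>
  </property>
  <property>
    <!-- placeholder path; the listing marks this property as a directory -->
    <name>yarn.timeline-service.leveldb-state-store.path</name>
    <value>/hadoop/yarn/timeline</value>
  </property>

The file's listing continues below.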
  yarn.node-labels.fs-store.root-dir = /system/yarn/node-labels

  yarn.scheduler.minimum-allocation-vcores
      (surviving text: depends on yarn.nodemanager.resource.cpu-vcores)

  yarn.scheduler.maximum-allocation-vcores
      (surviving text: depends on yarn.nodemanager.resource.cpu-vcores)

  yarn.node-labels.enabled
      (only the value-attribute text "1" survives this hunk)

  yarn.node-labels.manager-class = org.apache.hadoop.yarn.server.resourcemanager.nodelabels.MemoryRMNodeLabelsManager
      If you want to enable this feature, set it to
      "org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager".
      (duplicate of the earlier entry; additional surviving text: true)

  yarn.nodemanager.container-executor.class
      (surviving text: hadoop.security.authentication, likely a depends-on
      reference)

  yarn.nodemanager.linux-container-executor.group
      (surviving text: user_group, likely the property type)

  yarn.resourcemanager.scheduler.monitor.enable
      (only the value-attribute text "1" survives this hunk)

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/core-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/core-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/core-site.xml
index f9a1eac..0276c13 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/core-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/core-site.xml

Properties touched by the hunks in this file:

  fs.defaultFS
      Provide VIPRFS bucket details using the format
      viprfs://$BUCKET_NAME.$NAMESPACE.$SITE_NAME_from_fs.vipr.installations

  hadoop.security.authentication = simple
      Supported values: simple, kerberos.

  hadoop.security.authorization = false
      Supported values: true, false.

  hadoop.security.auth_to_local = DEFAULT

  fs.permissions.umask-mode = 022

  fs.vipr.installations = Site1
      Provide the site name of the tenant.

  fs.vipr.installation.Site1.hosts
      Provide ECS node IPs or VIP.

  fs.vipr.installation.Site1.resolution = dynamic

  fs.vipr.installation.Site1.resolution.dynamic.time_to_live_ms = 900000

  fs.viprfs.auth.anonymous_translation = LOCAL_USER
      Supported values are LOCAL_USER. Applicable only for insecure cluster
      deployments.

  fs.viprfs.auth.identity_translation = NONE
      Supported values are NONE (default), FIXED_REALM, and
      CURRENT_USER_REALM.

  content (a further hunk, @@ -147,7 +147,7 @@, shown under this same file
      header in the archive; surviving context line:
      export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:/usr/lib/hadoop/lib/native/Linux-a... [truncated])
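The fs.defaultFS format documented for this file expands as follows; the bucket and namespace names here are invented placeholders, while Site1 matches the fs.vipr.installations value in the listing:

  <property>
    <!-- viprfs://$BUCKET_NAME.$NAMESPACE.$SITE_NAME_from_fs.vipr.installations -->
    <name>fs.defaultFS</name>
    <value>viprfs://mybucket.mynamespace.Site1</value>
  </property>
  <property>
    <name>fs.vipr.installations</name>
    <value>Site1</value>
  </property>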
http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/hdfs-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/hdfs-site.xml
index 64d59f0..f0de7e3 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/hdfs-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ECS/configuration/hdfs-site.xml

Properties touched by the hunks in this file:

  dfs.permissions.enabled = true

  dfs.permissions.superusergroup = hdfs

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-env.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-env.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-env.xml
index 9efa800..12bab1f 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-env.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-env.xml

  content (hunk @@ -105,7 +105,7 @@; surviving context line:
      export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xmn{{regionserver_xmn_... [truncated])

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-site.xml
index cc1f666..3396e00 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/HBASE/configuration/hbase-site.xml

  hbase.rootdir (no value text survives)

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/TEZ/configuration/tez-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/TEZ/configuration/tez-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/TEZ/configuration/tez-site.xml
index 60fc0fe..215605d 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/TEZ/configuration/tez-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/TEZ/configuration/tez-site.xml

  tez.cluster.additional.classpath.prefix = /usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure:/usr/lib/hadoop/lib/*

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration-mapred/mapred-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration-mapred/mapred-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration-mapred/mapred-site.xml
index dd528ef..c1d2ebf 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration-mapred/mapred-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration-mapred/mapred-site.xml

  (property name lost at the hunk boundary; surviving description:)
      CLASSPATH for MR applications. A comma-separated list of CLASSPATH
      entries.

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration/yarn-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration/yarn-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration/yarn-site.xml
index 208e530..07188fa 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration/yarn-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/YARN/configuration/yarn-site.xml

  yarn.application.classpath = $HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*,/usr/lib/hadoop/lib/*
      Classpath for typical applications.

http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/ACCUMULO/configuration/accumulo-log4j.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/ACCUMULO/configuration/accumulo-log4j.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/ACCUMULO/configuration/accumulo-log4j.xml
index 006edbc..8f28baf 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/ACCUMULO/configuration/accumulo-log4j.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/ACCUMULO/configuration/accumulo-log4j.xml

Properties touched by the hunks in this file:

  audit_log_level = OFF
      Log level for audit logging.

  monitor_forwarding_log_level = WARN
      Log level for log messages forwarded to the Accumulo Monitor.

  debug_log_size = 512M
      Size of each debug rolling log file.

  debug_num_logs = 10
      Number of rolling debug log files to keep.

  info_log_size = 512M
      Size of each info rolling log file.

  info_num_logs = 10
      Number of rolling info log files to keep.

  content (hunk @@ -115,7 +115,7 @@; surviving context line:
      log4j.appender.A1.layout=org.apache.log4j.PatternLayout)
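The rolling-log sizes and counts above feed the file's log4j template. A hand-written sketch of how such settings typically land in a rolling appender; the appender name A1 is taken from the one surviving PatternLayout line, while the file path and conversion pattern are placeholders:

  # rolling debug log: 512M per file, keep 10 files, per the listed settings
  log4j.appender.A1=org.apache.log4j.RollingFileAppender
  log4j.appender.A1.File=${accumulo.log.dir}/debug.log
  log4j.appender.A1.MaxFileSize=512M
  log4j.appender.A1.MaxBackupIndex=10
  log4j.appender.A1.layout=org.apache.log4j.PatternLayout
  log4j.appender.A1.layout.ConversionPattern=%d{ISO8601} [%-8c{2}] %-5p: %m%n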
http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/core-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/core-site.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/core-site.xml
index f3e0d27..f0b16c6 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/core-site.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/core-site.xml

Properties touched by the hunks in this file:

  fs.AbstractFileSystem.glusterfs.impl = org.apache.hadoop.fs.local.GlusterFs
      GlusterFS Abstract File System implementation.

  fs.glusterfs.impl = org.apache.hadoop.fs.glusterfs.GlusterFileSystem
      GlusterFS fs impl.

  fs.defaultFS = glusterfs:///localhost:8020

  ha.failover-controller.active-standby-elector.zk.op.retries = 120
      ZooKeeper Failover Controller retries setting for your environment.

  (property name lost at the hunk boundary)
      The size of this buffer should probably be a multiple of hardware page
      size (4096 on Intel x86), and it determines how much data is buffered
      during read and write operations.

  io.serializations = org.apache.hadoop.io.serializer.WritableSerialization
      A list of comma-delimited serialization classes that can be used for
      obtaining serializers and deserializers.

  io.compression.codecs = org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec
      A list of the compression codec classes that can be used for
      compression/decompression.

  fs.trash.interval
      (description truncated) ... If trash is disabled server side, then the
      client-side configuration is checked. If trash is enabled on the
      server side, then the value configured on the server is used and the
      client configuration value is ignored.

  (property name lost at the hunk boundary)
      Defines the threshold number of connections after which connections
      will be inspected for idleness.

  ipc.client.connection.maxidletime
      The maximum time after which a client will bring down the connection
      to the server.

  ipc.client.connect.max.retries = 50
      Defines the maximum number of retries for IPC connections.

  ipc.server.tcpnodelay
      (description truncated) ... decrease latency with a cost of
      more/smaller packets.

  (property name lost at the hunk boundary)
      (description truncated) ... not be exposed to the public. Enable this
      option if the interfaces are only reachable by those who have the
      right authorization.

  hadoop.security.authentication
      Set the authentication for the cluster. Valid values are: simple or
      kerberos.

  hadoop.security.authorization
      Enable authorization for different protocols.

  hadoop.security.auth_to_local
      (description truncated) ... If you want to treat all principals from
      APACHE.ORG with /admin as "admin", your rule would be:
      RULE:[2:$1%$2@$0](.*%admin@APACHE.ORG)s/.*/admin/
      DEFAULT

  net.topology.script.file.name
      Location of the topology script used by Hadoop to determine the rack
      location of nodes.
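Spelled out as an actual value, the admin-mapping rule quoted in that description sits ahead of DEFAULT; a minimal sketch assuming standard Hadoop KerberosName rule syntax:

  <!-- principals from APACHE.ORG whose second component is "admin" map to
       local user "admin"; everything else falls through to DEFAULT -->
  <property>
    <name>hadoop.security.auth_to_local</name>
    <value>
RULE:[2:$1%$2@$0](.*%admin@APACHE.ORG)s/.*/admin/
DEFAULT
    </value>
  </property>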
http://git-wip-us.apache.org/repos/asf/ambari/blob/70aacc6d/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/hadoop-env.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/hadoop-env.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/hadoop-env.xml
index 8baa6da..58bda2b 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/hadoop-env.xml
+++ b/ambari-server/src/main/resources/stacks/HDP/2.3.GlusterFS/services/GLUSTERFS/configuration/hadoop-env.xml

Properties touched by the hunks in this file:

  (property name lost at the hunk boundary; surviving value-attribute text:
      true, false)

  hadoop_heapsize
      (surviving value-attribute text: MB, likely the unit)

  glusterfs_user
      (surviving value-attribute text: false)

  hdfs_log_dir_prefix
      (surviving value-attribute text: true, false)

  namenode_heapsize
      (surviving value-attribute text: true, false)

  namenode_host
      NameNode host.

  snamenode_host
      Secondary NameNode host.

  proxyuser_group
      (surviving value-attribute text: false)

  hdfs_user = hdfs (display name: HDFS User)
      User to run HDFS as.
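For orientation, an Ambari env property such as hadoop_heapsize above is declared in roughly this shape; the default value and description wording are placeholders, while the MB unit comes from the surviving hunk text:

  <property>
    <name>hadoop_heapsize</name>
    <!-- placeholder default, in MB -->
    <value>1024</value>
    <description>Hadoop maximum Java heap size (assumed wording)</description>
    <value-attributes>
      <unit>MB</unit>
    </value-attributes>
  </property>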