Return-Path: X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id 80B5A200BE5 for ; Fri, 18 Nov 2016 23:50:00 +0100 (CET) Received: by cust-asf.ponee.io (Postfix) id 7FB83160B04; Fri, 18 Nov 2016 22:50:00 +0000 (UTC) Delivered-To: archive-asf-public@cust-asf.ponee.io Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id 5D49B160B20 for ; Fri, 18 Nov 2016 23:49:58 +0100 (CET) Received: (qmail 30605 invoked by uid 500); 18 Nov 2016 22:49:57 -0000 Mailing-List: contact commits-help@ambari.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: ambari-dev@ambari.apache.org Delivered-To: mailing list commits@ambari.apache.org Received: (qmail 30247 invoked by uid 99); 18 Nov 2016 22:49:57 -0000 Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 18 Nov 2016 22:49:57 +0000 Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 25455F1595; Fri, 18 Nov 2016 22:49:57 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: alejandro@apache.org To: commits@ambari.apache.org Date: Fri, 18 Nov 2016 22:50:04 -0000 Message-Id: In-Reply-To: <439a9ddab8c7404e89aab6967f22340b@git.apache.org> References: <439a9ddab8c7404e89aab6967f22340b@git.apache.org> X-Mailer: ASF-Git Admin Mailer Subject: [08/13] ambari git commit: AMBARI-18928. Perf: Add Hadoop Core services to PERF stack (alejandro) archived-at: Fri, 18 Nov 2016 22:50:00 -0000 http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-env.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-env.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-env.xml new file mode 100644 index 0000000..51cbf4a --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-env.xml @@ -0,0 +1,419 @@ + + + + + + + hdfs_log_dir_prefix + /var/log/hadoop + Hadoop Log Dir Prefix + Hadoop Log Dir Prefix + + directory + false + + + + + hadoop_pid_dir_prefix + /var/run/hadoop + Hadoop PID Dir Prefix + Hadoop PID Dir Prefix + + directory + false + true + + + + + hadoop_root_logger + INFO,RFA + Hadoop Root Logger + Hadoop Root Logger + + false + + + + + hadoop_heapsize + 1024 + Hadoop maximum Java heap size + Hadoop maximum Java heap size + + int + MB + false + + + + + namenode_heapsize + 1024 + NameNode Java heap size + NameNode Java heap size + + int + 0 + 268435456 + MB + 256 + false + + + + hdfs-site + dfs.datanode.data.dir + + + + + + namenode_opt_newsize + 200 + Default size of Java new generation for NameNode (Java option -XX:NewSize) Note: The value of NameNode new generation size (default size of Java new generation for NameNode (Java option -XX:NewSize)) should be 1/8 of maximum heap size (-Xmx). Ensure that the value of the namenode_opt_newsize property is 1/8 the value of maximum heap size (-Xmx). 
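The heap-sizing entries above (hadoop_log_dir_prefix, hadoop_heapsize, namenode_heapsize, namenode_opt_newsize) come from the new hadoop-env.xml, whose XML markup is not preserved in this archive view. As a rough sketch of the structure such an Ambari stack property takes, using the values and the hadoop-env/namenode_heapsize dependency quoted in the surrounding text (attribute names and ordering here are illustrative, not a verbatim excerpt of the committed file):

    <property>
      <name>namenode_opt_newsize</name>
      <value>200</value>
      <display-name>NameNode new generation size</display-name>
      <description>Default size of Java new generation for NameNode (Java option -XX:NewSize).
        Should be roughly 1/8 of the NameNode maximum heap size (-Xmx).</description>
      <value-attributes>
        <type>int</type>
        <minimum>0</minimum>
        <maximum>16384</maximum>
        <unit>MB</unit>
        <increment-step>256</increment-step>
      </value-attributes>
      <depends-on>
        <property>
          <type>hadoop-env</type>
          <name>namenode_heapsize</name>
        </property>
      </depends-on>
      <on-ambari-upgrade add="true"/>
    </property>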
+ NameNode new generation size + + + hadoop-env + namenode_heapsize + + + + int + 0 + 16384 + MB + 256 + false + + + + + namenode_opt_maxnewsize + 200 + NameNode maximum new generation size + NameNode maximum new generation size + + + hadoop-env + namenode_heapsize + + + + int + 0 + 16384 + MB + 256 + false + + + + + namenode_opt_permsize + 128 + NameNode permanent generation size + NameNode permanent generation size + + int + 0 + 2096 + MB + 128 + false + + + + + namenode_opt_maxpermsize + 256 + NameNode maximum permanent generation size + NameNode maximum permanent generation size + + int + 0 + 2096 + MB + 128 + false + + + + + dtnode_heapsize + 1024 + DataNode maximum Java heap size + DataNode maximum Java heap size + + int + 0 + 268435456 + MB + 128 + + + + + proxyuser_group + Proxy User Group + users + GROUP + Proxy user group. + + user + false + + + + + hdfs_user + HDFS User + hdfs + USER + User to run HDFS as + + user + false + + + + + hdfs_tmp_dir + /tmp + HDFS tmp Dir + HDFS tmp Dir + NOT_MANAGED_HDFS_PATH + + true + false + false + + + + + hdfs_user_nofile_limit + 128000 + Max open files limit setting for HDFS user. + + + + hdfs_user_nproc_limit + 65536 + Max number of processes limit setting for HDFS user. + + + + namenode_backup_dir + Local directory for storing backup copy of NameNode images during upgrade + /tmp/upgrades + + + + hdfs_user_keytab + HDFS keytab path + + + + hdfs_principal_name + HDFS principal name + + + + + + keyserver_host + + Key Server Host + Hostnames where Key Management Server is installed + + string + + + + + keyserver_port + + Key Server Port + Port number where Key Management Server is available + + int + true + + + + + + + + content + hadoop-env template + This is the jinja template for hadoop-env.sh file + +# Set Hadoop-specific environment variables here. + +# The only required environment variable is JAVA_HOME. All others are +# optional. When running a distributed configuration it is best to +# set JAVA_HOME in this file, so that it is correctly defined on +# remote nodes. + +# The java implementation to use. Required. +export JAVA_HOME={{java_home}} +export HADOOP_HOME_WARN_SUPPRESS=1 + +# Hadoop home directory +export HADOOP_HOME=${HADOOP_HOME:-{{hadoop_home}}} + +# Hadoop Configuration Directory + +{# this is different for HDP1 #} +# Path to jsvc required by secure HDP 2.0 datanode +export JSVC_HOME={{jsvc_path}} + + +# The maximum amount of heap to use, in MB. Default is 1000. +export HADOOP_HEAPSIZE="{{hadoop_heapsize}}" + +export HADOOP_NAMENODE_INIT_HEAPSIZE="-Xms{{namenode_heapsize}}" + +# Extra Java runtime options. Empty by default. 
+export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}" + +# Command specific options appended to HADOOP_OPTS when specified +HADOOP_JOBTRACKER_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{jtnode_opt_newsize}} -XX:MaxNewSize={{jtnode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx{{jtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}" + +HADOOP_TASKTRACKER_OPTS="-server -Xmx{{ttnode_heapsize}} -Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}" + +{% if java_version < 8 %} +SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -XX:PermSize={{namenode_opt_permsize}} -XX:MaxPermSize={{namenode_opt_maxpermsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT" +export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}" +export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly" + +export HADOOP_SECONDARYNAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\" ${HADOOP_SECONDARYNAMENODE_OPTS}" + +# The following applies to multiple commands (fs, dfs, fsck, distcp etc) +export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m -XX:MaxPermSize=512m $HADOOP_CLIENT_OPTS" + +{% else %} +SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT" +export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}" +export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 
-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS} -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly" + +export HADOOP_SECONDARYNAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\" ${HADOOP_SECONDARYNAMENODE_OPTS}" + +# The following applies to multiple commands (fs, dfs, fsck, distcp etc) +export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m $HADOOP_CLIENT_OPTS" +{% endif %} + +HADOOP_NFS3_OPTS="-Xmx{{nfsgateway_heapsize}}m -Dhadoop.security.logger=ERROR,DRFAS ${HADOOP_NFS3_OPTS}" +HADOOP_BALANCER_OPTS="-server -Xmx{{hadoop_heapsize}}m ${HADOOP_BALANCER_OPTS}" + + +# On secure datanodes, user to run the datanode as after dropping privileges +export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER:-{{hadoop_secure_dn_user}}} + +# Extra ssh options. Empty by default. +export HADOOP_SSH_OPTS="-o ConnectTimeout=5 -o SendEnv=HADOOP_CONF_DIR" + +# Where log files are stored. $HADOOP_HOME/logs by default. +export HADOOP_LOG_DIR={{hdfs_log_dir_prefix}}/$USER + +# History server logs +export HADOOP_MAPRED_LOG_DIR={{mapred_log_dir_prefix}}/$USER + +# Where log files are stored in the secure data environment. +export HADOOP_SECURE_DN_LOG_DIR={{hdfs_log_dir_prefix}}/$HADOOP_SECURE_DN_USER + +# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default. +# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves + +# host:path where hadoop code should be rsync'd from. Unset by default. +# export HADOOP_MASTER=master:/home/$USER/src/hadoop + +# Seconds to sleep between slave commands. Unset by default. This +# can be useful in large clusters, where, e.g., slave rsyncs can +# otherwise arrive faster than the master can service them. +# export HADOOP_SLAVE_SLEEP=0.1 + +# The directory where pid files are stored. /tmp by default. +export HADOOP_PID_DIR={{hadoop_pid_dir_prefix}}/$USER +export HADOOP_SECURE_DN_PID_DIR={{hadoop_pid_dir_prefix}}/$HADOOP_SECURE_DN_USER + +# History server pid +export HADOOP_MAPRED_PID_DIR={{mapred_pid_dir_prefix}}/$USER + +YARN_RESOURCEMANAGER_OPTS="-Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY" + +# A string representing this instance of hadoop. $USER by default. +export HADOOP_IDENT_STRING=$USER + +# The scheduling priority for daemon processes. See 'man nice'. + +# export HADOOP_NICENESS=10 + +# Add database libraries +JAVA_JDBC_LIBS="" +if [ -d "/usr/share/java" ]; then + for jarFile in `ls /usr/share/java | grep -E "(mysql|ojdbc|postgresql|sqljdbc)" 2>/dev/null` + do + JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile + done +fi + +# Add libraries to the hadoop classpath - some may not need a colon as they already include it +export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}${JAVA_JDBC_LIBS} + +# Setting path to hdfs command line +export HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}} + +# Mostly required for hadoop 2.0 +export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH} + +export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS" + + +# Fix temporary bug, when ulimit from conf files is not picked up, without full relogin. 
+# Makes sense to fix only when runing DN as root +if [ "$command" == "datanode" ] && [ "$EUID" -eq 0 ] && [ -n "$HADOOP_SECURE_DN_USER" ]; then + {% if is_datanode_max_locked_memory_set %} + ulimit -l {{datanode_max_locked_memory}} + {% endif %} + ulimit -n {{hdfs_user_nofile_limit}} +fi + + + content + + + + + nfsgateway_heapsize + NFSGateway maximum Java heap size + 1024 + Maximum Java heap size for NFSGateway (Java option -Xmx) + + int + MB + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-metrics2.properties.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-metrics2.properties.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-metrics2.properties.xml new file mode 100644 index 0000000..6b45e84 --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-metrics2.properties.xml @@ -0,0 +1,125 @@ + + + + + + + + content + hadoop-metrics2.properties template + This is the jinja template for hadoop-metrics2.properties file + +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +# syntax: [prefix].[source|sink|jmx].[instance].[options] +# See package.html for org.apache.hadoop.metrics2 for details + +{% if has_ganglia_server %} +*.period=60 + +*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31 +*.sink.ganglia.period=10 + +# default for supportsparse is false +*.sink.ganglia.supportsparse=true + +.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both +.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40 + +# Hook up to the server +namenode.sink.ganglia.servers={{ganglia_server_host}}:8661 +datanode.sink.ganglia.servers={{ganglia_server_host}}:8659 +jobtracker.sink.ganglia.servers={{ganglia_server_host}}:8662 +tasktracker.sink.ganglia.servers={{ganglia_server_host}}:8658 +maptask.sink.ganglia.servers={{ganglia_server_host}}:8660 +reducetask.sink.ganglia.servers={{ganglia_server_host}}:8660 +resourcemanager.sink.ganglia.servers={{ganglia_server_host}}:8664 +nodemanager.sink.ganglia.servers={{ganglia_server_host}}:8657 +historyserver.sink.ganglia.servers={{ganglia_server_host}}:8666 +journalnode.sink.ganglia.servers={{ganglia_server_host}}:8654 +nimbus.sink.ganglia.servers={{ganglia_server_host}}:8649 +supervisor.sink.ganglia.servers={{ganglia_server_host}}:8650 + +resourcemanager.sink.ganglia.tagsForPrefix.yarn=Queue + +{% endif %} + +{% if has_metric_collector %} + +*.period={{metrics_collection_period}} +*.sink.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar +*.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink +*.sink.timeline.period={{metrics_collection_period}} +*.sink.timeline.sendInterval={{metrics_report_interval}}000 +*.sink.timeline.slave.host.name={{hostname}} +*.sink.timeline.zookeeper.quorum={{zookeeper_quorum}} +*.sink.timeline.protocol={{metric_collector_protocol}} +*.sink.timeline.port={{metric_collector_port}} + +# HTTPS properties +*.sink.timeline.truststore.path = {{metric_truststore_path}} +*.sink.timeline.truststore.type = {{metric_truststore_type}} +*.sink.timeline.truststore.password = {{metric_truststore_password}} + +datanode.sink.timeline.collector.hosts={{ams_collector_hosts}} +namenode.sink.timeline.collector.hosts={{ams_collector_hosts}} +resourcemanager.sink.timeline.collector.hosts={{ams_collector_hosts}} +nodemanager.sink.timeline.collector.hosts={{ams_collector_hosts}} +jobhistoryserver.sink.timeline.collector.hosts={{ams_collector_hosts}} +journalnode.sink.timeline.collector.hosts={{ams_collector_hosts}} +maptask.sink.timeline.collector.hosts={{ams_collector_hosts}} +reducetask.sink.timeline.collector.hosts={{ams_collector_hosts}} +applicationhistoryserver.sink.timeline.collector.hosts={{ams_collector_hosts}} + +resourcemanager.sink.timeline.tagsForPrefix.yarn=Queue + +{% if is_nn_client_port_configured %} +# Namenode rpc ports customization +namenode.sink.timeline.metric.rpc.client.port={{nn_rpc_client_port}} +{% endif %} +{% if is_nn_dn_port_configured %} +namenode.sink.timeline.metric.rpc.datanode.port={{nn_rpc_dn_port}} +{% endif %} +{% if is_nn_healthcheck_port_configured %} +namenode.sink.timeline.metric.rpc.healthcheck.port={{nn_rpc_healthcheck_port}} +{% endif %} + +{% endif %} + + + content + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-policy.xml ---------------------------------------------------------------------- diff --git 
a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-policy.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-policy.xml new file mode 100644 index 0000000..8e9486d --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hadoop-policy.xml @@ -0,0 +1,130 @@ + + + + + + + security.client.protocol.acl + * + ACL for ClientProtocol, which is used by user code + via the DistributedFileSystem. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.client.datanode.protocol.acl + * + ACL for ClientDatanodeProtocol, the client-to-datanode protocol + for block recovery. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.datanode.protocol.acl + * + ACL for DatanodeProtocol, which is used by datanodes to + communicate with the namenode. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.inter.datanode.protocol.acl + * + ACL for InterDatanodeProtocol, the inter-datanode protocol + for updating generation timestamp. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.namenode.protocol.acl + * + ACL for NamenodeProtocol, the protocol used by the secondary + namenode to communicate with the namenode. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.inter.tracker.protocol.acl + * + ACL for InterTrackerProtocol, used by the tasktrackers to + communicate with the jobtracker. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.job.client.protocol.acl + * + ACL for JobSubmissionProtocol, used by job clients to + communciate with the jobtracker for job submission, querying job status etc. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.job.task.protocol.acl + * + ACL for TaskUmbilicalProtocol, used by the map and reduce + tasks to communicate with the parent tasktracker. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + + security.admin.operations.protocol.acl + hadoop + ACL for AdminOperationsProtocol. Used for admin commands. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. 
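The hadoop-policy.xml entries above are plain Hadoop service-level authorization ACLs: each value is a comma-separated list of users and a comma-separated list of groups, the two lists separated by a blank. A sketch of the admin-operations entry in ordinary hadoop-policy.xml form, based on the values quoted above (illustrative formatting, not a verbatim excerpt of the diff):

    <property>
      <name>security.admin.operations.protocol.acl</name>
      <value>hadoop</value>
      <description>ACL for AdminOperationsProtocol, used for admin commands.
        For example, "alice,bob users,wheel" allows users alice and bob plus the
        groups users and wheel; a value of "*" allows everyone.</description>
    </property>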
+ + + + security.refresh.usertogroups.mappings.protocol.acl + hadoop + ACL for RefreshUserMappingsProtocol. Used to refresh + users mappings. The ACL is a comma-separated list of user and + group names. The user and group list is separated by a blank. For + e.g. "alice,bob users,wheel". A special value of "*" means all + users are allowed. + + + + security.refresh.policy.protocol.acl + hadoop + ACL for RefreshAuthorizationPolicyProtocol, used by the + dfsadmin and mradmin commands to refresh the security policy in-effect. + The ACL is a comma-separated list of user and group names. The user and + group list is separated by a blank. For e.g. "alice,bob users,wheel". + A special value of "*" means all users are allowed. + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-log4j.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-log4j.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-log4j.xml new file mode 100644 index 0000000..4bf4cfe --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-log4j.xml @@ -0,0 +1,225 @@ + + + + + + content + hdfs-log4j template + Custom log4j.properties + +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# + + +# Define some default values that can be overridden by system properties +# To change daemon root logger use hadoop_root_logger in hadoop-env +hadoop.root.logger=INFO,console +hadoop.log.dir=. +hadoop.log.file=hadoop.log + + +# Define the root logger to the system property "hadoop.root.logger". 
+log4j.rootLogger=${hadoop.root.logger}, EventCounter + +# Logging Threshold +log4j.threshhold=ALL + +# +# Daily Rolling File Appender +# + +log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender +log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file} + +# Rollver at midnight +log4j.appender.DRFA.DatePattern=.yyyy-MM-dd + +# 30-day backup +#log4j.appender.DRFA.MaxBackupIndex=30 +log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout + +# Pattern format: Date LogLevel LoggerName LogMessage +log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n +# Debugging Pattern format +#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n + + +# +# console +# Add "console" to rootlogger above if you want to use this +# + +log4j.appender.console=org.apache.log4j.ConsoleAppender +log4j.appender.console.target=System.err +log4j.appender.console.layout=org.apache.log4j.PatternLayout +log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n + +# +# TaskLog Appender +# + +#Default values +hadoop.tasklog.taskid=null +hadoop.tasklog.iscleanup=false +hadoop.tasklog.noKeepSplits=4 +hadoop.tasklog.totalLogFileSize=100 +hadoop.tasklog.purgeLogSplits=true +hadoop.tasklog.logsRetainHours=12 + +log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender +log4j.appender.TLA.taskId=${hadoop.tasklog.taskid} +log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup} +log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize} + +log4j.appender.TLA.layout=org.apache.log4j.PatternLayout +log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n + +# +#Security audit appender +# +hadoop.security.logger=INFO,console +hadoop.security.log.maxfilesize=256MB +hadoop.security.log.maxbackupindex=20 +log4j.category.SecurityLogger=${hadoop.security.logger} +hadoop.security.log.file=SecurityAuth.audit +log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender +log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} +log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout +log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n +log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd + +log4j.appender.RFAS=org.apache.log4j.RollingFileAppender +log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file} +log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout +log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n +log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize} +log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex} + +# +# hdfs audit logging +# +hdfs.audit.logger=INFO,console +log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger} +log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false +log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender +log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log +log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout +log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n +log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd + +# +# NameNode metrics logging. +# The default is to retain two namenode-metrics.log files up to 64MB each. 
+# +namenode.metrics.logger=INFO,NullAppender +log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger} +log4j.additivity.NameNodeMetricsLog=false +log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender +log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log +log4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout +log4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n +log4j.appender.NNMETRICSRFA.MaxBackupIndex=1 +log4j.appender.NNMETRICSRFA.MaxFileSize=64MB + +# +# mapred audit logging +# +mapred.audit.logger=INFO,console +log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger} +log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false +log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender +log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log +log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout +log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n +log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd + +# +# Rolling File Appender +# + +log4j.appender.RFA=org.apache.log4j.RollingFileAppender +log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file} + +# Logfile size and and 30-day backups +log4j.appender.RFA.MaxFileSize=256MB +log4j.appender.RFA.MaxBackupIndex=10 + +log4j.appender.RFA.layout=org.apache.log4j.PatternLayout +log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n +log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n + + +# Custom Logging levels + +hadoop.metrics.log.level=INFO +#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG +#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG +#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG +log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level} + +# Jets3t library +log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR + +# +# Null Appender +# Trap security logger on the hadoop client side +# +log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender + +# +# Event Counter Appender +# Sends counts of logging messages at different severity levels to Hadoop Metrics. +# +log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter + +# Removes "deprecated" messages +log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN + +# +# HDFS block state change log from block manager +# +# Uncomment the following to suppress normal block state change +# messages from BlockManager in NameNode. 
+#log4j.logger.BlockStateChange=WARN + + + content + false + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-logsearch-conf.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-logsearch-conf.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-logsearch-conf.xml new file mode 100644 index 0000000..d85a028 --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-logsearch-conf.xml @@ -0,0 +1,248 @@ + + + + + + service_name + Service name + Service name for Logsearch Portal (label) + HDFS + + + + component_mappings + Component mapping + Logsearch component logid mapping list (e.g.: COMPONENT1:logid1,logid2;COMPONENT2:logid3) + NAMENODE:hdfs_namenode;DATANODE:hdfs_datanode;SECONDARY_NAMENODE:hdfs_secondarynamenode;JOURNALNODE:hdfs_journalnode;ZKFC:hdfs_zkfc;NFS_GATEWAY:hdfs_nfs3 + + + + content + Logfeeder Config + Metadata jinja template for Logfeeder which contains grok patterns for reading service specific logs. + +{ + "input":[ + { + "type":"hdfs_datanode", + "rowtype":"service", + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hadoop-{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}-datanode-*.log" + }, + { + "type":"hdfs_namenode", + "rowtype":"service", + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hadoop-{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}-namenode-*.log" + }, + { + "type":"hdfs_journalnode", + "rowtype":"service", + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hadoop-{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}-journalnode-*.log" + }, + { + "type":"hdfs_secondarynamenode", + "rowtype":"service", + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hadoop-{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}-secondarynamenode-*.log" + }, + { + "type":"hdfs_zkfc", + "rowtype":"service", + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hadoop-{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}-zkfc-*.log" + }, + { + "type":"hdfs_nfs3", + "rowtype":"service", + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hadoop-{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}-nfs3-*.log" + }, + { + "type":"hdfs_audit", + "rowtype":"audit", + "is_enabled":"true", + "add_fields":{ + "logType":"HDFSAudit", + "enforcer":"hadoop-acl", + "repoType":"1", + "repo":"hdfs" + }, + "path":"{{default('/configurations/hadoop-env/hdfs_log_dir_prefix', '/var/log/hadoop')}}/{{default('configurations/hadoop-env/hdfs_user', 'hdfs')}}/hdfs-audit.log" + } + ], + "filter":[ + { + "filter":"grok", + "conditions":{ + "fields":{ + "type":[ + "hdfs_datanode", + "hdfs_journalnode", + "hdfs_secondarynamenode", + "hdfs_namenode", + "hdfs_zkfc", + "hdfs_nfs3" + ] + } + }, + 
"log4j_format":"%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n", + "multiline_pattern":"^(%{TIMESTAMP_ISO8601:logtime})", + "message_pattern":"(?m)^%{TIMESTAMP_ISO8601:logtime}%{SPACE}%{LOGLEVEL:level}%{SPACE}%{JAVACLASS:logger_name}%{SPACE}\\(%{JAVAFILE:file}:%{JAVAMETHOD:method}\\(%{INT:line_number}\\)\\)%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}", + "post_map_values":{ + "logtime":{ + "map_date":{ + "target_date_pattern":"yyyy-MM-dd HH:mm:ss,SSS" + } + } + } + }, + { + "filter":"grok", + "conditions":{ + "fields":{ + "type":[ + "hdfs_audit" + ] + } + }, + "log4j_format":"%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n", + "multiline_pattern":"^(%{TIMESTAMP_ISO8601:evtTime})", + "message_pattern":"(?m)^%{TIMESTAMP_ISO8601:evtTime}%{SPACE}%{LOGLEVEL:level}%{SPACE}%{JAVACLASS:logger_name}:%{SPACE}%{GREEDYDATA:log_message}", + "post_map_values":{ + "evtTime":{ + "map_date":{ + "target_date_pattern":"yyyy-MM-dd HH:mm:ss,SSS" + } + } + } + }, + { + "filter":"keyvalue", + "sort_order":1, + "conditions":{ + "fields":{ + "type":[ + "hdfs_audit" + ] + } + }, + "source_field":"log_message", + "value_split":"=", + "field_split":"\t", + "post_map_values":{ + "src":{ + "map_fieldname":{ + "new_fieldname":"resource" + } + }, + "ip":{ + "map_fieldname":{ + "new_fieldname":"cliIP" + } + }, + "allowed":[ + { + "map_fieldvalue":{ + "pre_value":"true", + "post_value":"1" + } + }, + { + "map_fieldvalue":{ + "pre_value":"false", + "post_value":"0" + } + }, + { + "map_fieldname":{ + "new_fieldname":"result" + } + } + ], + "cmd":{ + "map_fieldname":{ + "new_fieldname":"action" + } + }, + "proto":{ + "map_fieldname":{ + "new_fieldname":"cliType" + } + }, + "callerContext":{ + "map_fieldname":{ + "new_fieldname":"req_caller_id" + } + } + } + }, + { + "filter":"grok", + "sort_order":2, + "source_field":"ugi", + "remove_source_field":"false", + "conditions":{ + "fields":{ + "type":[ + "hdfs_audit" + ] + } + }, + "message_pattern":"%{USERNAME:p_user}.+auth:%{USERNAME:p_authType}.+via %{USERNAME:k_user}.+auth:%{USERNAME:k_authType}|%{USERNAME:user}.+auth:%{USERNAME:authType}|%{USERNAME:x_user}", + "post_map_values":{ + "user":{ + "map_fieldname":{ + "new_fieldname":"reqUser" + } + }, + "x_user":{ + "map_fieldname":{ + "new_fieldname":"reqUser" + } + }, + "p_user":{ + "map_fieldname":{ + "new_fieldname":"reqUser" + } + }, + "k_user":{ + "map_fieldname":{ + "new_fieldname":"proxyUsers" + } + }, + "p_authType":{ + "map_fieldname":{ + "new_fieldname":"authType" + } + }, + "k_authType":{ + "map_fieldname":{ + "new_fieldname":"proxyAuthType" + } + } + } + } + ] + } + + + content + false + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-site.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-site.xml new file mode 100644 index 0000000..8912682 --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/hdfs-site.xml @@ -0,0 +1,633 @@ + + + + + + + + + dfs.namenode.name.dir + + /hadoop/hdfs/namenode + NameNode directories + Determines where on the local filesystem the DFS name node + should store the name table. If this is a comma-delimited list + of directories then the name table is replicated in all of the + directories, for redundancy. 
+ true + + directories + false + + + + + dfs.support.append + true + to enable dfs append + true + + + + dfs.webhdfs.enabled + true + WebHDFS enabled + Whether to enable WebHDFS feature + true + + boolean + false + + + + + dfs.datanode.failed.volumes.tolerated + 0 + Number of failed disks a DataNode would tolerate before it stops offering service + true + DataNode failed disk tolerance + + int + 0 + 2 + 1 + + + + hdfs-site + dfs.datanode.data.dir + + + + + + dfs.datanode.data.dir + /hadoop/hdfs/data + DataNode directories + Determines where on the local filesystem an DFS data node + should store its blocks. If this is a comma-delimited + list of directories, then data will be stored in all named + directories, typically on different devices. + Directories that do not exist are ignored. + + true + + directories + + + + + dfs.hosts.exclude + /etc/hadoop/conf/dfs.exclude + Names a file that contains a list of hosts that are + not permitted to connect to the namenode. The full pathname of the + file must be specified. If the value is empty, no hosts are + excluded. + + + + + dfs.namenode.checkpoint.dir + /hadoop/hdfs/namesecondary + SecondaryNameNode Checkpoint directories + Determines where on the local filesystem the DFS secondary + name node should store the temporary images to merge. + If this is a comma-delimited list of directories then the image is + replicated in all of the directories for redundancy. + + + directories + false + + + + + dfs.namenode.checkpoint.edits.dir + ${dfs.namenode.checkpoint.dir} + Determines where on the local filesystem the DFS secondary + name node should store the temporary edits to merge. + If this is a comma-delimited list of directories then the edits are + replicated in all of the directories for redundancy. + Default value is same as dfs.namenode.checkpoint.dir + + + + + dfs.namenode.checkpoint.period + 21600 + HDFS Maximum Checkpoint Delay + The number of seconds between two periodic checkpoints. + + int + seconds + + + + + dfs.namenode.checkpoint.txns + 1000000 + The Secondary NameNode or CheckpointNode will create a checkpoint + of the namespace every 'dfs.namenode.checkpoint.txns' transactions, + regardless of whether 'dfs.namenode.checkpoint.period' has expired. + + + + + dfs.replication.max + 50 + Maximal block replication. + + + + + dfs.replication + 3 + Block replication + Default block replication. + + + int + + + + + dfs.heartbeat.interval + 3 + Determines datanode heartbeat interval in seconds. + + + + dfs.namenode.safemode.threshold-pct + 0.999 + + Specifies the percentage of blocks that should satisfy + the minimal replication requirement defined by dfs.namenode.replication.min. + Values less than or equal to 0 mean not to start in safe mode. + Values greater than 1 will make safe mode permanent. + + Minimum replicated blocks % + + float + 0.990 + 1.000 + 0.001 + + + + + dfs.datanode.balance.bandwidthPerSec + 6250000 + + Specifies the maximum amount of bandwidth that each datanode + can utilize for the balancing purpose in term of + the number of bytes per second. + + + + + dfs.https.port + 50470 + + This property is used by HftpFileSystem. + + + + + dfs.datanode.address + 0.0.0.0:50010 + + The datanode server address and port for data transfer. + + + + + dfs.datanode.http.address + 0.0.0.0:50075 + + The datanode http server address and port. + + + + + dfs.datanode.https.address + 0.0.0.0:50475 + + The datanode https server address and port. + + + + + dfs.blocksize + 134217728 + The default block size for new files. 
+ + + + dfs.namenode.http-address + localhost:50070 + The name of the default file system. Either the + literal string "local" or a host:port for HDFS. + true + + + + dfs.namenode.rpc-address + localhost:8020 + RPC address that handles all clients requests. + + + + dfs.datanode.du.reserved + + 1073741824 + Reserved space for HDFS + Reserved space in bytes per volume. Always leave this much space free for non dfs use. + + + int + bytes + + + + hdfs-site + dfs.datanode.data.dir + + + + + + dfs.datanode.ipc.address + 0.0.0.0:8010 + + The datanode ipc server address and port. + If the port is 0 then the server will start on a free port. + + + + + dfs.blockreport.initialDelay + 120 + Delay for first block report in seconds. + + + + dfs.datanode.max.transfer.threads + 1024 + Specifies the maximum number of threads to use for transferring data in and out of the datanode. + DataNode max data transfer threads + + int + 0 + 48000 + + + + + + fs.permissions.umask-mode + 022 + + The octal umask used when creating files and directories. + + + + + dfs.permissions.enabled + true + + If "true", enable permission checking in HDFS. + If "false", permission checking is turned off, + but all other behavior is unchanged. + Switching from one parameter value to the other does not change the mode, + owner or group of files or directories. + + + + + dfs.permissions.superusergroup + hdfs + The name of the group of super-users. + + + + dfs.namenode.handler.count + 100 + Added to grow Queue size so that more client connections are allowed + NameNode Server threads + + int + 1 + 200 + + + + + dfs.block.access.token.enable + true + + If "true", access tokens are used as capabilities for accessing datanodes. + If "false", no access tokens are checked on accessing datanodes. + + + + + + dfs.namenode.secondary.http-address + localhost:50090 + Address of secondary namenode web server + + + + dfs.namenode.https-address + localhost:50470 + The https address where namenode binds + + + + dfs.datanode.data.dir.perm + 750 + DataNode directories permission + The permissions that should be there on dfs.datanode.data.dir + directories. The datanode will not come up if the permissions are + different on existing dfs.datanode.data.dir directories. If the directories + don't exist, they will be created with this permission. + + int + + + + + dfs.namenode.accesstime.precision + 0 + Access time precision + The access time for HDFS file is precise up to this value. + The default value is 1 hour. Setting a value of 0 disables + access times for HDFS. + + + int + + + + + dfs.cluster.administrators + hdfs + ACL for who all can view the default servlets in the HDFS + + + + dfs.namenode.avoid.read.stale.datanode + true + + Indicate whether or not to avoid reading from stale datanodes whose + heartbeat messages have not been received by the namenode for more than a + specified time interval. + + + + + dfs.namenode.avoid.write.stale.datanode + true + + Indicate whether or not to avoid writing to stale datanodes whose + heartbeat messages have not been received by the namenode for more than a + specified time interval. + + + + + dfs.namenode.write.stale.datanode.ratio + 1.0f + When the ratio of number stale datanodes to total datanodes marked is greater + than this ratio, stop avoiding writing to stale nodes so as to prevent causing hotspots. 
+ + + + + dfs.namenode.stale.datanode.interval + 30000 + Datanode is stale after not getting a heartbeat in this interval in ms + + + + dfs.journalnode.http-address + 0.0.0.0:8480 + The address and port the JournalNode web UI listens on. + If the port is 0 then the server will start on a free port. + + + + dfs.journalnode.https-address + 0.0.0.0:8481 + The address and port the JournalNode HTTPS server listens on. + If the port is 0 then the server will start on a free port. + + + + + dfs.client.read.shortcircuit + true + HDFS Short-circuit read + + This configuration parameter turns on short-circuit local reads. + + + boolean + + + + + dfs.domain.socket.path + /var/lib/hadoop-hdfs/dn_socket + + This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. + If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode. + + + + + dfs.client.read.shortcircuit.streams.cache.size + 4096 + + The DFSClient maintains a cache of recently opened file descriptors. This + parameter controls the size of that cache. Setting this higher will use + more file descriptors, but potentially provide better performance on + workloads involving lots of seeks. + + + + + dfs.namenode.name.dir.restore + true + Set to true to enable NameNode to attempt recovering a previously failed dfs.namenode.name.dir. + When enabled, a recovery of any failed directory is attempted during checkpoint. + + + + dfs.http.policy + HTTP_ONLY + + Decide if HTTPS(SSL) is supported on HDFS This configures the HTTP endpoint for HDFS daemons: + The following values are supported: - HTTP_ONLY : Service is provided only on http - HTTPS_ONLY : + Service is provided only on https - HTTP_AND_HTTPS : Service is provided both on http and https + + + + + + + dfs.namenode.audit.log.async + true + Whether to enable async auditlog + + + + dfs.namenode.fslock.fair + false + Whether fsLock is fair + + + + + + dfs.namenode.startup.delay.block.deletion.sec + 3600 + + The delay in seconds at which we will pause the blocks deletion + after Namenode startup. By default it's disabled. + In the case a directory has large number of directories and files are + deleted, suggested delay is one hour to give the administrator enough time + to notice large number of pending deletion blocks and take corrective + action. + + + + + dfs.journalnode.edits.dir + /hadoop/hdfs/journalnode + The path where the JournalNode daemon will store its local state. + + + + dfs.client.retry.policy.enabled + false + Enables HDFS client retry in the event of a NameNode failure. + + + + dfs.content-summary.limit + 5000 + Dfs content summary limit. + + + + dfs.encryption.key.provider.uri + + The KeyProvider to use when interacting with encryption keys used + when reading and writing to an encryption zone. + + + + true + + + + hadoop-env + keyserver_host + + + hadoop-env + keyserver_port + + + kms-env + kms_port + + + ranger-kms-site + ranger.service.https.attrib.ssl.enabled + + + + + + + + nfs.file.dump.dir + /tmp/.hdfs-nfs + NFSGateway dump directory + + This directory is used to temporarily save out-of-order writes before + writing to HDFS. For each file, the out-of-order writes are dumped after + they are accumulated to exceed certain threshold (e.g., 1MB) in memory. + One needs to make sure the directory has enough space. + + + directory + + + + + nfs.exports.allowed.hosts + * rw + + By default, the export can be mounted by any client. 
To better control the access, + users can update the following property. The value string contains machine name and access privilege, + separated by whitespace characters. Machine name format can be single host, wildcards, and IPv4 + networks.The access privilege uses rw or ro to specify readwrite or readonly access of the machines + to exports. If the access privilege is not provided, the default is read-only. Entries are separated + by ";". For example: "192.168.0.0/22 rw ; host*.example.com ; host1.test.org ro;". + + Allowed hosts + + + + dfs.encrypt.data.transfer.cipher.suites + AES/CTR/NoPadding + + This value may be either undefined or AES/CTR/NoPadding. If defined, then + dfs.encrypt.data.transfer uses the specified cipher suite for data encryption. + If not defined, then only the algorithm specified in dfs.encrypt.data.transfer.algorithm + is used. By default, the property is not defined. + + + + + dfs.namenode.inode.attributes.provider.class + Enable ranger hdfs plugin + + + ranger-hdfs-plugin-properties + ranger-hdfs-plugin-enabled + + + + false + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-audit.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-audit.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-audit.xml new file mode 100644 index 0000000..3dc46b3 --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-audit.xml @@ -0,0 +1,124 @@ + + + + + + + xasecure.audit.is.enabled + true + Is Audit enabled? + + + + xasecure.audit.destination.hdfs + true + Audit to HDFS + Is Audit to HDFS enabled? + + boolean + + + + ranger-env + xasecure.audit.destination.hdfs + + + + + + xasecure.audit.destination.hdfs.dir + hdfs://NAMENODE_HOSTNAME:8020/ranger/audit + HDFS folder to write audit to, make sure the service user has requried permissions + + + ranger-env + xasecure.audit.destination.hdfs.dir + + + + + + xasecure.audit.destination.hdfs.batch.filespool.dir + /var/log/hadoop/hdfs/audit/hdfs/spool + /var/log/hadoop/hdfs/audit/hdfs/spool + + + + xasecure.audit.destination.solr + false + Audit to SOLR + Is Solr audit enabled? + + boolean + + + + ranger-env + xasecure.audit.destination.solr + + + + + + xasecure.audit.destination.solr.urls + + Solr URL + + true + + + + ranger-admin-site + ranger.audit.solr.urls + + + + + + xasecure.audit.destination.solr.zookeepers + NONE + Solr Zookeeper string + + + ranger-admin-site + ranger.audit.solr.zookeepers + + + + + + xasecure.audit.destination.solr.batch.filespool.dir + /var/log/hadoop/hdfs/audit/solr/spool + /var/log/hadoop/hdfs/audit/solr/spool + + + + xasecure.audit.provider.summary.enabled + false + Audit provider summary enabled + Enable Summary audit? 
+ + boolean + + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-plugin-properties.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-plugin-properties.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-plugin-properties.xml new file mode 100644 index 0000000..deede1c --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-plugin-properties.xml @@ -0,0 +1,88 @@ + + + + + + policy_user + ambari-qa + Policy user for HDFS + This user must be system user and also present at Ranger + admin portal + + + + common.name.for.certificate + + Common name for certificate, this value should match what is specified in repo within ranger admin + + true + + + + + ranger-hdfs-plugin-enabled + No + Enable Ranger for HDFS + Enable ranger hdfs plugin + + + ranger-env + ranger-hdfs-plugin-enabled + + + + boolean + false + + + + + REPOSITORY_CONFIG_USERNAME + hadoop + Ranger repository config user + Used for repository creation on ranger admin + + + + + REPOSITORY_CONFIG_PASSWORD + hadoop + Ranger repository config password + PASSWORD + Used for repository creation on ranger admin + + + password + + + + + + + hadoop.rpc.protection + authentication + Used for repository creation on ranger admin + + true + + + + http://git-wip-us.apache.org/repos/asf/ambari/blob/6e8d3458/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-policymgr-ssl.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-policymgr-ssl.xml b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-policymgr-ssl.xml new file mode 100644 index 0000000..081ec2d --- /dev/null +++ b/ambari-server/src/main/resources/stacks/PERF/1.0/services/HDFS/configuration/ranger-hdfs-policymgr-ssl.xml @@ -0,0 +1,67 @@ + + + + + + xasecure.policymgr.clientssl.keystore + /usr/hdp/current/hadoop-client/conf/ranger-plugin-keystore.jks + Java Keystore files + + + + xasecure.policymgr.clientssl.keystore.password + myKeyFilePassword + PASSWORD + password for keystore + + password + + + + + xasecure.policymgr.clientssl.truststore + /usr/hdp/current/hadoop-client/conf/ranger-plugin-truststore.jks + java truststore file + + + + xasecure.policymgr.clientssl.truststore.password + changeit + PASSWORD + java truststore password + + password + + + + + xasecure.policymgr.clientssl.keystore.credential.file + jceks://file{{credential_file}} + java keystore credential file + + + + xasecure.policymgr.clientssl.truststore.credential.file + jceks://file{{credential_file}} + java truststore credential file + + +
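The ranger-hdfs-policymgr-ssl.xml entries at the end of the diff mark the keystore and truststore passwords with the PASSWORD property type and point their credential files at jceks://file{{credential_file}}. A sketch of how one such password property is typically declared in Ambari configuration XML, using the truststore values quoted above (attribute set is illustrative, not a verbatim excerpt):

    <property>
      <name>xasecure.policymgr.clientssl.truststore.password</name>
      <value>changeit</value>
      <property-type>PASSWORD</property-type>
      <description>java truststore password</description>
      <value-attributes>
        <type>password</type>
      </value-attributes>
      <on-ambari-upgrade add="true"/>
    </property>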