From: huor@apache.org
To: commits@hawq.incubator.apache.org
Reply-To: dev@hawq.incubator.apache.org
Date: Fri, 13 Jan 2017 11:38:57 -0000
In-Reply-To: <6373197a060a49dc8ae09bcaa7e8c2cc@git.apache.org>
References: <6373197a060a49dc8ae09bcaa7e8c2cc@git.apache.org>
Subject: [2/3] incubator-hawq git commit: Revert "HAWQ-1248. Merge Dockerfiles for HAWQ Dev into HAWQ code base."
archived-at: Fri, 13 Jan 2017 11:39:16 -0000

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/log4j.properties
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/log4j.properties b/contrib/hawq-docker/centos6-docker/hawq-test/conf/log4j.properties
deleted file mode 100644
index c901ab1..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/log4j.properties
+++ /dev/null
@@ -1,291 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Define some default values that can be overridden by system properties
-hadoop.root.logger=INFO,console
-hadoop.log.dir=.
-hadoop.log.file=hadoop.log
-
-# Define the root logger to the system property "hadoop.root.logger".
-log4j.rootLogger=${hadoop.root.logger}, EventCounter
-
-# Logging Threshold
-log4j.threshold=ALL
-
-# Null Appender
-log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
-
-#
-# Rolling File Appender - cap space usage at 5gb.
-#
-hadoop.log.maxfilesize=256MB
-hadoop.log.maxbackupindex=20
-log4j.appender.RFA=org.apache.log4j.RollingFileAppender
-log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
-
-log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
-log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
-
-log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
-
-# Pattern format: Date LogLevel LoggerName LogMessage
-log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-# Debugging Pattern format
-#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
-
-
-#
-# Daily Rolling File Appender
-#
-
-log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
-
-# Rollover at midnight
-log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
-
-log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
-
-# Pattern format: Date LogLevel LoggerName LogMessage
-log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-# Debugging Pattern format
-#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
-
-
-#
-# console
-# Add "console" to rootlogger above if you want to use this
-#
-
-log4j.appender.console=org.apache.log4j.ConsoleAppender
-log4j.appender.console.target=System.err
-log4j.appender.console.layout=org.apache.log4j.PatternLayout
-log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
-
-#
-# TaskLog Appender
-#
-
-#Default values
-hadoop.tasklog.taskid=null
-hadoop.tasklog.iscleanup=false
-hadoop.tasklog.noKeepSplits=4
-hadoop.tasklog.totalLogFileSize=100
-hadoop.tasklog.purgeLogSplits=true
-hadoop.tasklog.logsRetainHours=12
-
-log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
-log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
-log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
-log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
-
-log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
-log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-
-#
-# HDFS block state change log from block manager
-#
-# Uncomment the following to suppress normal block state change
-# messages from BlockManager in NameNode.
-#log4j.logger.BlockStateChange=WARN
-
-#
-#Security appender
-#
-hadoop.security.logger=INFO,NullAppender
-hadoop.security.log.maxfilesize=256MB
-hadoop.security.log.maxbackupindex=20
-log4j.category.SecurityLogger=${hadoop.security.logger}
-hadoop.security.log.file=SecurityAuth-${user.name}.audit
-log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
-log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
-log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
-log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
-log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
-
-#
-# Daily Rolling Security appender
-#
-log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
-log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
-log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
-
-#
-# hadoop configuration logging
-#
-
-# Uncomment the following line to turn off configuration deprecation warnings.
-# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
-
-#
-# hdfs audit logging
-#
-hdfs.audit.logger=INFO,NullAppender
-hdfs.audit.log.maxfilesize=256MB
-hdfs.audit.log.maxbackupindex=20
-log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
-log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
-log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
-log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
-log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
-log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
-log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
-
-#
-# NameNode metrics logging.
-# The default is to retain two namenode-metrics.log files up to 64MB each.
-#
-namenode.metrics.logger=INFO,NullAppender
-log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
-log4j.additivity.NameNodeMetricsLog=false
-log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
-log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log
-log4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout
-log4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
-log4j.appender.NNMETRICSRFA.MaxBackupIndex=1
-log4j.appender.NNMETRICSRFA.MaxFileSize=64MB
-
-#
-# mapred audit logging
-#
-mapred.audit.logger=INFO,NullAppender
-mapred.audit.log.maxfilesize=256MB
-mapred.audit.log.maxbackupindex=20
-log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
-log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
-log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
-log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
-log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
-log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
-log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
-
-# Custom Logging levels
-
-#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
-#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
-#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG
-
-# Jets3t library
-log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
-
-# AWS SDK & S3A FileSystem
-log4j.logger.com.amazonaws=ERROR
-log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
-log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN
-
-#
-# Event Counter Appender
-# Sends counts of logging messages at different severity levels to Hadoop Metrics.
-#
-log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
-
-#
-# Job Summary Appender
-#
-# Use following logger to send summary to separate file defined by
-# hadoop.mapreduce.jobsummary.log.file :
-# hadoop.mapreduce.jobsummary.logger=INFO,JSA
-#
-hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
-hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
-hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
-hadoop.mapreduce.jobsummary.log.maxbackupindex=20
-log4j.appender.JSA=org.apache.log4j.RollingFileAppender
-log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
-log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
-log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
-log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
-log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
-log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
-log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
-
-#
-# Yarn ResourceManager Application Summary Log
-#
-# Set the ResourceManager summary log filename
-yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
-# Set the ResourceManager summary log level and appender
-yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
-#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
-
-# To enable AppSummaryLogging for the RM,
-# set yarn.server.resourcemanager.appsummary.logger to
-# <LEVEL>,RMSUMMARY in hadoop-env.sh
-
-# Appender for ResourceManager Application Summary Log
-# Requires the following properties to be set
-#    - hadoop.log.dir (Hadoop Log directory)
-#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
-#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
-
-log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
-log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
-log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
-log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
-log4j.appender.RMSUMMARY.MaxFileSize=256MB
-log4j.appender.RMSUMMARY.MaxBackupIndex=20
-log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
-log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-
-# HS audit log configs
-#mapreduce.hs.audit.logger=INFO,HSAUDIT
-#log4j.logger.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=${mapreduce.hs.audit.logger}
-#log4j.additivity.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=false
-#log4j.appender.HSAUDIT=org.apache.log4j.DailyRollingFileAppender
-#log4j.appender.HSAUDIT.File=${hadoop.log.dir}/hs-audit.log
-#log4j.appender.HSAUDIT.layout=org.apache.log4j.PatternLayout
-#log4j.appender.HSAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-#log4j.appender.HSAUDIT.DatePattern=.yyyy-MM-dd
-
-# Http Server Request Logs
-#log4j.logger.http.requests.namenode=INFO,namenoderequestlog
-#log4j.appender.namenoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.namenoderequestlog.Filename=${hadoop.log.dir}/jetty-namenode-yyyy_mm_dd.log
-#log4j.appender.namenoderequestlog.RetainDays=3
-
-#log4j.logger.http.requests.datanode=INFO,datanoderequestlog
-#log4j.appender.datanoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.datanoderequestlog.Filename=${hadoop.log.dir}/jetty-datanode-yyyy_mm_dd.log
-#log4j.appender.datanoderequestlog.RetainDays=3
-
-#log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
-#log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
-#log4j.appender.resourcemanagerrequestlog.RetainDays=3
-
-#log4j.logger.http.requests.jobhistory=INFO,jobhistoryrequestlog
-#log4j.appender.jobhistoryrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.jobhistoryrequestlog.Filename=${hadoop.log.dir}/jetty-jobhistory-yyyy_mm_dd.log
-#log4j.appender.jobhistoryrequestlog.RetainDays=3
-
-#log4j.logger.http.requests.nodemanager=INFO,nodemanagerrequestlog
-#log4j.appender.nodemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.nodemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-nodemanager-yyyy_mm_dd.log
-#log4j.appender.nodemanagerrequestlog.RetainDays=3
-
-# Appender for viewing information for errors and warnings
-yarn.ewma.cleanupInterval=300
-yarn.ewma.messageAgeLimitSeconds=86400
-yarn.ewma.maxUniqueMessages=250
-log4j.appender.EWMA=org.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppender
-log4j.appender.EWMA.cleanupInterval=${yarn.ewma.cleanupInterval}
-log4j.appender.EWMA.messageAgeLimitSeconds=${yarn.ewma.messageAgeLimitSeconds}
-log4j.appender.EWMA.maxUniqueMessages=${yarn.ewma.maxUniqueMessages}
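[Editor's note] The properties file above resolves ${hadoop.root.logger} from a JVM
system property at startup, so the log level can be changed per invocation without
editing the file. A minimal sketch, assuming a standard Hadoop client whose bin
scripts honor HADOOP_ROOT_LOGGER (the example command itself is illustrative):

    # One-off DEBUG logging to stderr; "console" must name an appender
    # defined in log4j.properties, as it is above.
    export HADOOP_ROOT_LOGGER=DEBUG,console
    hadoop fs -ls /
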
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.cmd
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.cmd b/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.cmd
deleted file mode 100644
index 0d39526..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.cmd
+++ /dev/null
@@ -1,20 +0,0 @@
-@echo off
-@rem Licensed to the Apache Software Foundation (ASF) under one or more
-@rem contributor license agreements.  See the NOTICE file distributed with
-@rem this work for additional information regarding copyright ownership.
-@rem The ASF licenses this file to You under the Apache License, Version 2.0
-@rem (the "License"); you may not use this file except in compliance with
-@rem the License.  You may obtain a copy of the License at
-@rem
-@rem     http://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-
-set HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
-
-set HADOOP_MAPRED_ROOT_LOGGER=%HADOOP_LOGLEVEL%,RFA
-

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.sh b/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.sh
deleted file mode 100644
index 6be1e27..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-env.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
-
-export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
-
-export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
-
-#export HADOOP_JOB_HISTORYSERVER_OPTS=
-#export HADOOP_MAPRED_LOG_DIR="" # Where log files are stored.  $HADOOP_MAPRED_HOME/logs by default.
-#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.
-#export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.
-#export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default
-#export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. Defaults to 0.
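[Editor's note] Note the asymmetry between the two files just removed: mapred-env.cmd
derives the logger level from %HADOOP_LOGLEVEL%, while mapred-env.sh hardcodes
INFO,RFA. A sketch of the shell equivalent of the .cmd behavior, with a default when
the variable is unset (an assumption, not part of this commit):

    export HADOOP_MAPRED_ROOT_LOGGER="${HADOOP_LOGLEVEL:-INFO},RFA"
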
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-queues.xml.template
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-queues.xml.template b/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-queues.xml.template
deleted file mode 100644
index ce6cd20..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-queues.xml.template
+++ /dev/null
@@ -1,92 +0,0 @@
[XML markup lost in archiving; the only surviving text of this stock Hadoop queue
template is the queue name "default" and the queue state "running".]

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-site.xml.template
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-site.xml.template b/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-site.xml.template
deleted file mode 100644
index 761c352..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/mapred-site.xml.template
+++ /dev/null
@@ -1,21 +0,0 @@
[XML markup lost in archiving; no property text survived from this empty template.]

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/slaves
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/slaves b/contrib/hawq-docker/centos6-docker/hawq-test/conf/slaves
deleted file mode 100644
index 2fbb50c..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/slaves
+++ /dev/null
@@ -1 +0,0 @@
-localhost

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-client.xml.example
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-client.xml.example b/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-client.xml.example
deleted file mode 100644
index a50dce4..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-client.xml.example
+++ /dev/null
@@ -1,80 +0,0 @@
[XML prolog and license comment lost in archiving; surviving properties re-tagged
in the standard Hadoop layout:]
-<configuration>
-  <property>
-    <name>ssl.client.truststore.location</name>
-    <value></value>
-    <description>Truststore to be used by clients like distcp. Must be
-    specified.</description>
-  </property>
-  <property>
-    <name>ssl.client.truststore.password</name>
-    <value></value>
-    <description>Optional. Default value is "".</description>
-  </property>
-  <property>
-    <name>ssl.client.truststore.type</name>
-    <value>jks</value>
-    <description>Optional. The keystore file format, default value is "jks".</description>
-  </property>
-  <property>
-    <name>ssl.client.truststore.reload.interval</name>
-    <value>10000</value>
-    <description>Truststore reload check interval, in milliseconds.
-    Default value is 10000 (10 seconds).</description>
-  </property>
-  <property>
-    <name>ssl.client.keystore.location</name>
-    <value></value>
-    <description>Keystore to be used by clients like distcp. Must be
-    specified.</description>
-  </property>
-  <property>
-    <name>ssl.client.keystore.password</name>
-    <value></value>
-    <description>Optional. Default value is "".</description>
-  </property>
-  <property>
-    <name>ssl.client.keystore.keypassword</name>
-    <value></value>
-    <description>Optional. Default value is "".</description>
-  </property>
-  <property>
-    <name>ssl.client.keystore.type</name>
-    <value>jks</value>
-    <description>Optional. The keystore file format, default value is "jks".</description>
-  </property>
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-server.xml.example
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-server.xml.example b/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-server.xml.example
deleted file mode 100644
index 02d300c..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/ssl-server.xml.example
+++ /dev/null
@@ -1,78 +0,0 @@
[XML prolog and license comment lost in archiving; surviving properties re-tagged
in the standard Hadoop layout:]
-<configuration>
-  <property>
-    <name>ssl.server.truststore.location</name>
-    <value></value>
-    <description>Truststore to be used by NN and DN. Must be specified.</description>
-  </property>
-  <property>
-    <name>ssl.server.truststore.password</name>
-    <value></value>
-    <description>Optional. Default value is "".</description>
-  </property>
-  <property>
-    <name>ssl.server.truststore.type</name>
-    <value>jks</value>
-    <description>Optional. The keystore file format, default value is "jks".</description>
-  </property>
-  <property>
-    <name>ssl.server.truststore.reload.interval</name>
-    <value>10000</value>
-    <description>Truststore reload check interval, in milliseconds.
-    Default value is 10000 (10 seconds).</description>
-  </property>
-  <property>
-    <name>ssl.server.keystore.location</name>
-    <value></value>
-    <description>Keystore to be used by NN and DN. Must be specified.</description>
-  </property>
-  <property>
-    <name>ssl.server.keystore.password</name>
-    <value></value>
-    <description>Must be specified.</description>
-  </property>
-  <property>
-    <name>ssl.server.keystore.keypassword</name>
-    <value></value>
-    <description>Must be specified.</description>
-  </property>
-  <property>
-    <name>ssl.server.keystore.type</name>
-    <value>jks</value>
-    <description>Optional. The keystore file format, default value is "jks".</description>
-  </property>
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/conf/yarn-env.cmd
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/conf/yarn-env.cmd b/contrib/hawq-docker/centos6-docker/hawq-test/conf/yarn-env.cmd
deleted file mode 100644
index 74da35b..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/conf/yarn-env.cmd
+++ /dev/null
@@ -1,60 +0,0 @@
-@echo off
-@rem Licensed to the Apache Software Foundation (ASF) under one or more
-@rem contributor license agreements.  See the NOTICE file distributed with
-@rem this work for additional information regarding copyright ownership.
-@rem The ASF licenses this file to You under the Apache License, Version 2.0
-@rem (the "License"); you may not use this file except in compliance with
-@rem the License.  You may obtain a copy of the License at
-@rem
-@rem     http://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-
-@rem User for YARN daemons
-if not defined HADOOP_YARN_USER (
-  set HADOOP_YARN_USER=%yarn%
-)
-
-if not defined YARN_CONF_DIR (
-  set YARN_CONF_DIR=%HADOOP_YARN_HOME%\conf
-)
-
-if defined YARN_HEAPSIZE (
-  @rem echo run with Java heapsize %YARN_HEAPSIZE%
-  set JAVA_HEAP_MAX=-Xmx%YARN_HEAPSIZE%m
-)
-
-if not defined YARN_LOG_DIR (
-  set YARN_LOG_DIR=%HADOOP_YARN_HOME%\logs
-)
-
-if not defined YARN_LOGFILE (
-  set YARN_LOGFILE=yarn.log
-)
-
-@rem default policy file for service-level authorization
-if not defined YARN_POLICYFILE (
-  set YARN_POLICYFILE=hadoop-policy.xml
-)
-
-if not defined YARN_ROOT_LOGGER (
-  set YARN_ROOT_LOGGER=%HADOOP_LOGLEVEL%,console
-)
-
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.dir=%YARN_LOG_DIR%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.log.dir=%YARN_LOG_DIR%
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.file=%YARN_LOGFILE%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.log.file=%YARN_LOGFILE%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.home.dir=%HADOOP_YARN_HOME%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.id.str=%YARN_IDENT_STRING%
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.home.dir=%HADOOP_YARN_HOME%
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.root.logger=%YARN_ROOT_LOGGER%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.root.logger=%YARN_ROOT_LOGGER%
-if defined JAVA_LIBRARY_PATH (
-  set YARN_OPTS=%YARN_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
-)
-set YARN_OPTS=%YARN_OPTS% -Dyarn.policy.file=%YARN_POLICYFILE%
\ No newline at end of file
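[Editor's note] For comparison, the heap and logger logic of yarn-env.cmd translated
to POSIX shell — a sketch only, since the commit ships no yarn-env.sh counterpart:

    # Mirror of the .cmd logic: honor YARN_HEAPSIZE (in MB) and fall back
    # to HADOOP_LOGLEVEL for the root logger.
    [ -n "${YARN_HEAPSIZE}" ] && JAVA_HEAP_MAX="-Xmx${YARN_HEAPSIZE}m"
    export YARN_ROOT_LOGGER="${YARN_ROOT_LOGGER:-${HADOOP_LOGLEVEL:-INFO},console}"
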
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/entrypoint.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/entrypoint.sh b/contrib/hawq-docker/centos6-docker/hawq-test/entrypoint.sh
deleted file mode 100755
index 2c03287..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/entrypoint.sh
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-if [ -z "${NAMENODE}" ]; then
-  export NAMENODE=${HOSTNAME}
-fi
-
-if [ ! -f /etc/profile.d/hadoop.sh ]; then
-  echo '#!/bin/bash' | sudo tee /etc/profile.d/hadoop.sh
-  echo "export NAMENODE=${NAMENODE}" | sudo tee -a /etc/profile.d/hadoop.sh
-  sudo chmod a+x /etc/profile.d/hadoop.sh
-fi
-
-sudo start-hdfs.sh
-sudo sysctl -p
-sudo ln -s /usr/lib/libthrift-0.9.1.so /usr/lib64/libthrift-0.9.1.so
-
-exec "$@"

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos6-docker/hawq-test/start-hdfs.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos6-docker/hawq-test/start-hdfs.sh b/contrib/hawq-docker/centos6-docker/hawq-test/start-hdfs.sh
deleted file mode 100755
index 076fb0a..0000000
--- a/contrib/hawq-docker/centos6-docker/hawq-test/start-hdfs.sh
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-/etc/init.d/sshd start
-
-if [ -f /etc/profile.d/hadoop.sh ]; then
-  . /etc/profile.d/hadoop.sh
-fi
-
-if [ "${NAMENODE}" == "${HOSTNAME}" ]; then
-  if [ ! -d /tmp/hdfs/name/current ]; then
-    su -l hdfs -c "hdfs namenode -format"
-  fi
-
-  if [ -z "`ps aux | grep org.apache.hadoop.hdfs.server.namenode.NameNode | grep -v grep`" ]; then
-    su -l hdfs -c "hadoop-daemon.sh start namenode"
-  fi
-else
-  if [ -z "`ps aux | grep org.apache.hadoop.hdfs.server.datanode.DataNode | grep -v grep`" ]; then
-    su -l hdfs -c "hadoop-daemon.sh start datanode"
-  fi
-fi
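[Editor's note] entrypoint.sh persists NAMENODE into /etc/profile.d/hadoop.sh, and
start-hdfs.sh formats and starts either the NameNode or a DataNode depending on
whether the container is the NAMENODE host. A sketch for verifying the result inside
a running container; the commands are standard HDFS tooling, not part of the deleted
scripts:

    if [ "${NAMENODE}" = "${HOSTNAME}" ]; then
      su -l hdfs -c "hdfs dfsadmin -report"   # NameNode host: list live DataNodes
    else
      ps aux | grep '[D]ataNode'              # worker host: DataNode JVM should be up
    fi
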
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-dev/Dockerfile
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-dev/Dockerfile b/contrib/hawq-docker/centos7-docker/hawq-dev/Dockerfile
deleted file mode 100644
index 58d4ef0..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-dev/Dockerfile
+++ /dev/null
@@ -1,75 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-FROM centos:7
-
-MAINTAINER Richard Guo
-
-# install all software we need
-RUN yum install -y epel-release && \
-    yum makecache && \
-    yum install -y man passwd sudo tar which git mlocate links make bzip2 net-tools \
-    autoconf automake libtool m4 gcc gcc-c++ gdb bison flex cmake gperf maven indent \
-    libuuid-devel krb5-devel libgsasl-devel expat-devel libxml2-devel \
-    perl-ExtUtils-Embed pam-devel python-devel libcurl-devel snappy-devel \
-    thrift-devel libyaml-devel libevent-devel bzip2-devel openssl-devel \
-    openldap-devel protobuf-devel readline-devel net-snmp-devel apr-devel \
-    libesmtp-devel python-pip json-c-devel \
-    java-1.7.0-openjdk-devel lcov cmake \
-    openssh-clients openssh-server perl-JSON && \
-    yum clean all
-
-RUN pip --retries=50 --timeout=300 install pycrypto
-
-# OS requirement
-RUN echo "kernel.sem = 250 512000 100 2048" >> /etc/sysctl.conf
-
-# setup ssh server and keys for root
-RUN sshd-keygen && \
-    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa && \
-    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && \
-    chmod 0600 ~/.ssh/authorized_keys
-
-# create user gpadmin since HAWQ cannot run under root
-RUN groupadd -g 1000 gpadmin && \
-    useradd -u 1000 -g 1000 gpadmin && \
-    echo "gpadmin ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/gpadmin
-
-# sudo should not require tty
-RUN sed -i -e 's|Defaults requiretty|#Defaults requiretty|' /etc/sudoers
-
-# setup JAVA_HOME for all users
-RUN echo "#!/bin/sh" > /etc/profile.d/java.sh && \
-    echo "export JAVA_HOME=/etc/alternatives/java_sdk" >> /etc/profile.d/java.sh && \
-    chmod a+x /etc/profile.d/java.sh
-
-# set USER env
-RUN echo "#!/bin/bash" > /etc/profile.d/user.sh && \
-    echo "export USER=\`whoami\`" >> /etc/profile.d/user.sh && \
-    chmod a+x /etc/profile.d/user.sh
-
-ENV BASEDIR /data
-RUN mkdir -p /data && chmod 777 /data
-
-USER gpadmin
-
-# setup ssh client keys for gpadmin
-RUN ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa && \
-    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys && \
-    chmod 0600 ~/.ssh/authorized_keys
-
-WORKDIR /data
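[Editor's note] The image above sets BASEDIR and WORKDIR to /data, so a typical use
is mounting a HAWQ checkout there and building as gpadmin. A hypothetical invocation;
the image tag is taken from the FROM line of the test image below:

    docker run -it -v "$(pwd)":/data hawq/hawq-dev:centos7 bash
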
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/Dockerfile
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/Dockerfile b/contrib/hawq-docker/centos7-docker/hawq-test/Dockerfile
deleted file mode 100644
index ea5e22c..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/Dockerfile
+++ /dev/null
@@ -1,40 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-FROM hawq/hawq-dev:centos7
-
-MAINTAINER Richard Guo
-
-USER root
-
-## install HDP 2.5.0
-RUN curl -L "http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.0.0/hdp.repo" -o /etc/yum.repos.d/hdp.repo && \
-    yum install -y hadoop hadoop-hdfs hadoop-libhdfs hadoop-yarn hadoop-mapreduce hadoop-client hdp-select && \
-    yum clean all
-
-RUN ln -s /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh /usr/bin/hadoop-daemon.sh
-
-COPY conf/* /etc/hadoop/conf/
-
-COPY entrypoint.sh /usr/bin/entrypoint.sh
-COPY start-hdfs.sh /usr/bin/start-hdfs.sh
-
-USER gpadmin
-
-ENTRYPOINT ["entrypoint.sh"]
-CMD ["bash"]
-
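[Editor's note] The two Dockerfiles chain: hawq-test builds FROM hawq/hawq-dev:centos7
and layers the HDP Hadoop packages plus the entrypoint/start-hdfs scripts removed
above. A build-and-run sketch; the directory layout follows the paths in the diff
headers, and the tag names are otherwise assumptions:

    docker build -t hawq/hawq-dev:centos7  contrib/hawq-docker/centos7-docker/hawq-dev
    docker build -t hawq/hawq-test:centos7 contrib/hawq-docker/centos7-docker/hawq-test
    # entrypoint.sh defaults NAMENODE to the container's hostname when unset
    docker run -it hawq/hawq-test:centos7
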
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/capacity-scheduler.xml
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/capacity-scheduler.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/capacity-scheduler.xml
deleted file mode 100644
index 30f4eb9..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/capacity-scheduler.xml
+++ /dev/null
@@ -1,134 +0,0 @@
[XML prolog and license comment lost in archiving; surviving properties re-tagged
in the standard Hadoop layout:]
-<configuration>
-  <property>
-    <name>yarn.scheduler.capacity.maximum-applications</name>
-    <value>10000</value>
-    <description>Maximum number of applications that can be pending and running.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
-    <value>0.1</value>
-    <description>Maximum percent of resources in the cluster which can be used to run
-    application masters, i.e. controls the number of concurrently running
-    applications.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.resource-calculator</name>
-    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
-    <description>The ResourceCalculator implementation to be used to compare
-    Resources in the scheduler. The default, DefaultResourceCalculator, only uses
-    Memory, while DominantResourceCalculator uses dominant-resource to compare
-    multi-dimensional resources such as Memory, CPU etc.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.queues</name>
-    <value>default</value>
-    <description>The queues at this level (root is the root queue).</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.default.capacity</name>
-    <value>100</value>
-    <description>Default queue target capacity.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
-    <value>1</value>
-    <description>Default queue user limit, a percentage from 0.0 to 1.0.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
-    <value>100</value>
-    <description>The maximum capacity of the default queue.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.default.state</name>
-    <value>RUNNING</value>
-    <description>The state of the default queue. State can be one of RUNNING or STOPPED.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
-    <value>*</value>
-    <description>The ACL of who can submit jobs to the default queue.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.root.default.acl_administer_queue</name>
-    <value>*</value>
-    <description>The ACL of who can administer jobs on the default queue.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.node-locality-delay</name>
-    <value>40</value>
-    <description>Number of missed scheduling opportunities after which the CapacityScheduler
    attempts to schedule rack-local containers. Typically this should be set to the
-    number of nodes in the cluster; by default it is set to approximately the number
-    of nodes in one rack, which is 40.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.queue-mappings</name>
-    <value></value>
-    <description>A list of mappings that will be used to assign jobs to queues.
-    The syntax for this list is [u|g]:[name]:[queue_name][,next mapping]*.
-    Typically this list will be used to map users to queues, for example,
-    u:%user:%user maps all users to queues with the same name as the user.</description>
-  </property>
-  <property>
-    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
-    <value>false</value>
-    <description>If a queue mapping is present, will it override the value specified
-    by the user? This can be used by administrators to place jobs in queues
-    that are different than the one specified by the user. The default is false.</description>
-  </property>
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/configuration.xsl
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/configuration.xsl b/contrib/hawq-docker/centos7-docker/hawq-test/conf/configuration.xsl
deleted file mode 100644
index d50d80b..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/configuration.xsl
+++ /dev/null
@@ -1,40 +0,0 @@
[XSL markup lost in archiving; the file was the stock stylesheet that renders a
Hadoop configuration as an HTML table with the column headers "name", "value",
and "description".]

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/container-executor.cfg
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/container-executor.cfg b/contrib/hawq-docker/centos7-docker/hawq-test/conf/container-executor.cfg
deleted file mode 100644
index d68cee8..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/container-executor.cfg
+++ /dev/null
@@ -1,4 +0,0 @@
-yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
-banned.users=#comma separated list of users who can not run applications
-min.user.id=1000#Prevent other super-users
-allowed.system.users=##comma separated list of system users who CAN run applications

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/core-site.xml
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/core-site.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/core-site.xml
deleted file mode 100644
index afc37fc..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/core-site.xml
+++ /dev/null
@@ -1,24 +0,0 @@
[XML prolog and license comment lost in archiving; the one surviving property:]
-<configuration>
-  <property>
-    <name>fs.defaultFS</name>
-    <value>hdfs://${hdfs.namenode}:8020</value>
-  </property>
-</configuration>
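[Editor's note] fs.defaultFS above interpolates ${hdfs.namenode}, which hadoop-env.sh
(two files below) injects through HADOOP_OPTS="-Dhdfs.namenode=${NAMENODE}". A quick
check, inside a running container, that the substitution took effect — a sketch using
standard HDFS tooling:

    hdfs getconf -confKey fs.defaultFS   # expect hdfs://<namenode-host>:8020
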
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.cmd
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.cmd b/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.cmd
deleted file mode 100644
index bb40ec9..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.cmd
+++ /dev/null
@@ -1,92 +0,0 @@
-@echo off
-@rem Licensed to the Apache Software Foundation (ASF) under one or more
-@rem contributor license agreements.  See the NOTICE file distributed with
-@rem this work for additional information regarding copyright ownership.
-@rem The ASF licenses this file to You under the Apache License, Version 2.0
-@rem (the "License"); you may not use this file except in compliance with
-@rem the License.  You may obtain a copy of the License at
-@rem
-@rem     http://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-
-@rem Set Hadoop-specific environment variables here.
-
-@rem The only required environment variable is JAVA_HOME.  All others are
-@rem optional.  When running a distributed configuration it is best to
-@rem set JAVA_HOME in this file, so that it is correctly defined on
-@rem remote nodes.
-
-@rem The java implementation to use.  Required.
-set JAVA_HOME=%JAVA_HOME%
-
-@rem The jsvc implementation to use. Jsvc is required to run secure datanodes.
-@rem set JSVC_HOME=%JSVC_HOME%
-
-@rem set HADOOP_CONF_DIR=
-
-@rem Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
-if exist %HADOOP_HOME%\contrib\capacity-scheduler (
-  if not defined HADOOP_CLASSPATH (
-    set HADOOP_CLASSPATH=%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
-  ) else (
-    set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
-  )
-)
-
-@rem If TEZ_CLASSPATH is defined in the env, that means that TEZ is enabled
-@rem append it to the HADOOP_CLASSPATH
-
-if defined TEZ_CLASSPATH (
-  if not defined HADOOP_CLASSPATH (
-    set HADOOP_CLASSPATH=%TEZ_CLASSPATH%
-  ) else (
-    set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;%TEZ_CLASSPATH%
-  )
-)
-
-@rem The maximum amount of heap to use, in MB. Default is 1000.
-@rem set HADOOP_HEAPSIZE=
-@rem set HADOOP_NAMENODE_INIT_HEAPSIZE=""
-
-@rem Extra Java runtime options.  Empty by default.
-@rem set HADOOP_OPTS=%HADOOP_OPTS% -Djava.net.preferIPv4Stack=true
-
-@rem Command specific options appended to HADOOP_OPTS when specified
-if not defined HADOOP_SECURITY_LOGGER (
-  set HADOOP_SECURITY_LOGGER=INFO,RFAS
-)
-if not defined HDFS_AUDIT_LOGGER (
-  set HDFS_AUDIT_LOGGER=INFO,NullAppender
-)
-
-set HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_NAMENODE_OPTS%
-set HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS %HADOOP_DATANODE_OPTS%
-set HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_SECONDARYNAMENODE_OPTS%
-
-@rem The following applies to multiple commands (fs, dfs, fsck, distcp etc)
-set HADOOP_CLIENT_OPTS=-Xmx512m %HADOOP_CLIENT_OPTS%
-@rem set HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData %HADOOP_JAVA_PLATFORM_OPTS%"
-
-@rem On secure datanodes, user to run the datanode as after dropping privileges
-set HADOOP_SECURE_DN_USER=%HADOOP_SECURE_DN_USER%
-
-@rem Where log files are stored.  %HADOOP_HOME%/logs by default.
-@rem set HADOOP_LOG_DIR=%HADOOP_LOG_DIR%\%USERNAME%
-
-@rem Where log files are stored in the secure data environment.
-set HADOOP_SECURE_DN_LOG_DIR=%HADOOP_LOG_DIR%\%HADOOP_HDFS_USER%
-
-@rem The directory where pid files are stored. /tmp by default.
-@rem NOTE: this should be set to a directory that can only be written to by
-@rem       the user that will run the hadoop daemons.  Otherwise there is the
-@rem       potential for a symlink attack.
-set HADOOP_PID_DIR=%HADOOP_PID_DIR%
-set HADOOP_SECURE_DN_PID_DIR=%HADOOP_PID_DIR%
-
-@rem A string representing this instance of hadoop. %USERNAME% by default.
-set HADOOP_IDENT_STRING=%USERNAME%

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.sh b/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.sh
deleted file mode 100644
index 95511ed..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-env.sh
+++ /dev/null
@@ -1,110 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Set Hadoop-specific environment variables here.
-
-# The only required environment variable is JAVA_HOME.  All others are
-# optional.  When running a distributed configuration it is best to
-# set JAVA_HOME in this file, so that it is correctly defined on
-# remote nodes.
-
-# The java implementation to use.
-export JAVA_HOME=/etc/alternatives/java_sdk
-
-# The jsvc implementation to use. Jsvc is required to run secure datanodes
-# that bind to privileged ports to provide authentication of data transfer
-# protocol.  Jsvc is not required if SASL is configured for authentication of
-# data transfer protocol using non-privileged ports.
-#export JSVC_HOME=${JSVC_HOME}
-
-#export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
-
-# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
-#for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
-#  if [ "$HADOOP_CLASSPATH" ]; then
-#    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
-#  else
-#    export HADOOP_CLASSPATH=$f
-#  fi
-#done
-
-# The maximum amount of heap to use, in MB. Default is 1000.
-#export HADOOP_HEAPSIZE=
-#export HADOOP_NAMENODE_INIT_HEAPSIZE=""
-
-# Setup environment variable for docker image
-if [ -f /etc/profile.d/hadoop.sh ]; then
-  . /etc/profile.d/hadoop.sh
-fi
-
-if [ -z "${NAMENODE}" ]; then
-  echo "environment variable NAMENODE is not set!"
-  exit 1
-fi
-
-# Extra Java runtime options.  Empty by default.
-export HADOOP_OPTS="-Dhdfs.namenode=${NAMENODE}"
-#export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
-
-# Command specific options appended to HADOOP_OPTS when specified
-#export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
-#export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
-
-#export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
-
-#export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
-#export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
-
-# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
-#export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
-#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
-
-# On secure datanodes, user to run the datanode as after dropping privileges.
-# This **MUST** be uncommented to enable secure HDFS if using privileged ports
-# to provide authentication of data transfer protocol.  This **MUST NOT** be
-# defined if SASL is configured for authentication of data transfer protocol
-# using non-privileged ports.
-#export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
-
-# Where log files are stored.  $HADOOP_HOME/logs by default.
-export HADOOP_LOG_DIR=/var/log/hadoop
-export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
-
-# Where log files are stored in the secure data environment.
-#export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
-
-###
-# HDFS Mover specific parameters
-###
-# Specify the JVM options to be used when starting the HDFS Mover.
-# These options will be appended to the options specified as HADOOP_OPTS
-# and therefore may override any similar flags set in HADOOP_OPTS
-#
-# export HADOOP_MOVER_OPTS=""
-
-###
-# Advanced Users Only!
-###
-
-# The directory where pid files are stored. /tmp by default.
-# NOTE: this should be set to a directory that can only be written to by
-#       the user that will run the hadoop daemons.  Otherwise there is the
-#       potential for a symlink attack.
-#export HADOOP_PID_DIR=${HADOOP_PID_DIR}
-#export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
-
-# A string representing this instance of hadoop. $USER by default.
-#export HADOOP_IDENT_STRING=$USER
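[Editor's note] hadoop-env.sh exits with an error when NAMENODE is unset, which is why
entrypoint.sh writes it into /etc/profile.d/hadoop.sh. When invoking Hadoop commands
outside that entrypoint, exporting the variable first avoids the guard — a sketch,
with the hostname fallback mirroring entrypoint.sh:

    export NAMENODE="${NAMENODE:-$(hostname)}"
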
-ugi.class=org.apache.hadoop.metrics.spi.NullContext - -# Configuration of the "ugi" context for file -#ugi.class=org.apache.hadoop.metrics.file.FileContext -#ugi.period=10 -#ugi.fileName=/tmp/ugimetrics.log - -# Configuration of the "ugi" context for ganglia -# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext -# ugi.class=org.apache.hadoop.metrics.ganglia.GangliaContext31 -# ugi.period=10 -# ugi.servers=localhost:8649 - http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-metrics2.properties ---------------------------------------------------------------------- diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-metrics2.properties b/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-metrics2.properties deleted file mode 100644 index 0c09228..0000000 --- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-metrics2.properties +++ /dev/null @@ -1,68 +0,0 @@ -# syntax: [prefix].[source|sink].[instance].[options] -# See javadoc of package-info.java for org.apache.hadoop.metrics2 for details - -*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink -# default sampling period, in seconds -*.period=10 - -# The namenode-metrics.out will contain metrics from all context -#namenode.sink.file.filename=namenode-metrics.out -# Specifying a special sampling period for namenode: -#namenode.sink.*.period=8 - -#datanode.sink.file.filename=datanode-metrics.out - -#resourcemanager.sink.file.filename=resourcemanager-metrics.out - -#nodemanager.sink.file.filename=nodemanager-metrics.out - -#mrappmaster.sink.file.filename=mrappmaster-metrics.out - -#jobhistoryserver.sink.file.filename=jobhistoryserver-metrics.out - -# the following example split metrics of different -# context to different sinks (in this case files) -#nodemanager.sink.file_jvm.class=org.apache.hadoop.metrics2.sink.FileSink -#nodemanager.sink.file_jvm.context=jvm -#nodemanager.sink.file_jvm.filename=nodemanager-jvm-metrics.out -#nodemanager.sink.file_mapred.class=org.apache.hadoop.metrics2.sink.FileSink -#nodemanager.sink.file_mapred.context=mapred -#nodemanager.sink.file_mapred.filename=nodemanager-mapred-metrics.out - -# -# Below are for sending metrics to Ganglia -# -# for Ganglia 3.0 support -# *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30 -# -# for Ganglia 3.1 support -# *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31 - -# *.sink.ganglia.period=10 - -# default for supportsparse is false -# *.sink.ganglia.supportsparse=true - -#*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both -#*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40 - -# Tag values to use for the ganglia prefix. If not defined no tags are used. -# If '*' all tags are used. If specifiying multiple tags separate them with -# commas. Note that the last segment of the property name is the context name. 
-# -#*.sink.ganglia.tagsForPrefix.jvm=ProcesName -#*.sink.ganglia.tagsForPrefix.dfs= -#*.sink.ganglia.tagsForPrefix.rpc= -#*.sink.ganglia.tagsForPrefix.mapred= - -#namenode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 - -#datanode.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 - -#resourcemanager.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 - -#nodemanager.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 - -#mrappmaster.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 - -#jobhistoryserver.sink.ganglia.servers=yourgangliahost_1:8649,yourgangliahost_2:8649 http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml ---------------------------------------------------------------------- diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml deleted file mode 100644 index 2bf5c02..0000000 --- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml +++ /dev/null @@ -1,226 +0,0 @@ - - - - - - - - - security.client.protocol.acl - * - ACL for ClientProtocol, which is used by user code - via the DistributedFileSystem. - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.client.datanode.protocol.acl - * - ACL for ClientDatanodeProtocol, the client-to-datanode protocol - for block recovery. - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.datanode.protocol.acl - * - ACL for DatanodeProtocol, which is used by datanodes to - communicate with the namenode. - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.inter.datanode.protocol.acl - * - ACL for InterDatanodeProtocol, the inter-datanode protocol - for updating generation timestamp. - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.namenode.protocol.acl - * - ACL for NamenodeProtocol, the protocol used by the secondary - namenode to communicate with the namenode. - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.admin.operations.protocol.acl - * - ACL for AdminOperationsProtocol. Used for admin commands. - The ACL is a comma-separated list of user and group names. The user and - group list is separated by a blank. For e.g. "alice,bob users,wheel". - A special value of "*" means all users are allowed. - - - - security.refresh.user.mappings.protocol.acl - * - ACL for RefreshUserMappingsProtocol. Used to refresh - users mappings. The ACL is a comma-separated list of user and - group names. The user and group list is separated by a blank. For - e.g. "alice,bob users,wheel". A special value of "*" means all - users are allowed. 
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml
deleted file mode 100644
index 2bf5c02..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hadoop-policy.xml
+++ /dev/null
@@ -1,226 +0,0 @@
-<?xml version="1.0"?>
-<configuration>
-
-  <property>
-    <name>security.client.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ClientProtocol, which is used by user code
-    via the DistributedFileSystem.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.client.datanode.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ClientDatanodeProtocol, the client-to-datanode
-    protocol for block recovery.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.datanode.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for DatanodeProtocol, which is used by datanodes to
-    communicate with the namenode.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.inter.datanode.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for InterDatanodeProtocol, the inter-datanode protocol
-    for updating the generation timestamp.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.namenode.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for NamenodeProtocol, the protocol used by the secondary
-    namenode to communicate with the namenode.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.admin.operations.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for AdminOperationsProtocol. Used for admin commands.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.refresh.user.mappings.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for RefreshUserMappingsProtocol. Used to refresh
-    user mappings.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.refresh.policy.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for RefreshAuthorizationPolicyProtocol, used by the
-    dfsadmin and mradmin commands to refresh the security policy in effect.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.ha.service.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for the HAService protocol used by HAAdmin to manage
-    the active and stand-by states of the namenode.</description>
-  </property>
-
-  <property>
-    <name>security.zkfc.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for access to the ZK Failover Controller.</description>
-  </property>
-
-  <property>
-    <name>security.qjournal.service.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for QJournalProtocol, used by the NN to communicate with
-    JNs when using the QuorumJournalManager for edit logs.</description>
-  </property>
-
-  <property>
-    <name>security.mrhs.client.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for HSClientProtocol, used by job clients to
-    communicate with the MR History Server for job status etc.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.resourcetracker.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ResourceTrackerProtocol, used by the
-    ResourceManager and NodeManager to communicate with each other.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.resourcemanager-administration.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ResourceManagerAdministrationProtocol, for admin commands.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.applicationclient.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ApplicationClientProtocol, used by the ResourceManager
-    and application submission clients to communicate with each other.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.applicationmaster.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ApplicationMasterProtocol, used by the ResourceManager
-    and ApplicationMasters to communicate with each other.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.containermanagement.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for the ContainerManagementProtocol, used by the NodeManager
-    and ApplicationMasters to communicate with each other.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.resourcelocalizer.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for the ResourceLocalizer protocol, used by the NodeManager
-    and ResourceLocalizer to communicate with each other.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.job.task.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for TaskUmbilicalProtocol, used by the map and reduce
-    tasks to communicate with the parent tasktracker.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.job.client.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for MRClientProtocol, used by job clients to
-    communicate with the MR ApplicationMaster to query job status etc.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-
-  <property>
-    <name>security.applicationhistory.protocol.acl</name>
-    <value>*</value>
-    <description>ACL for ApplicationHistoryProtocol, used by the timeline
-    server and the generic history service client to communicate with each other.
-    The ACL is a comma-separated list of user and group names. The user
-    and group lists are separated by a blank, e.g. "alice,bob users,wheel".
-    A special value of "*" means all users are allowed.</description>
-  </property>
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hdfs-site.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/hdfs-site.xml
deleted file mode 100644
index 3f4f152..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/hdfs-site.xml
+++ /dev/null
@@ -1,100 +0,0 @@
-<?xml version="1.0"?>
-<configuration>
-
-  <property>
-    <name>dfs.name.dir</name>
-    <value>/tmp/hdfs/name</value>
-    <final>true</final>
-  </property>
-
-  <property>
-    <name>dfs.data.dir</name>
-    <value>/tmp/hdfs/data</value>
-    <final>true</final>
-  </property>
-
-  <property>
-    <name>dfs.permissions</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>dfs.support.append</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>dfs.block.local-path-access.user</name>
-    <value>${user.name}</value>
-  </property>
-
-  <property>
-    <name>dfs.replication</name>
-    <value>3</value>
-  </property>
-
-  <property>
-    <name>dfs.datanode.socket.write.timeout</name>
-    <value>0</value>
-    <description>
-    Used for sockets to and from datanodes. It is 8 minutes by default. Some
-    users set this to 0, effectively disabling the write timeout.
-    </description>
-  </property>
-
-  <property>
-    <name>dfs.webhdfs.enabled</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>dfs.allow.truncate</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>dfs.namenode.fs-limits.min-block-size</name>
-    <value>1024</value>
-  </property>
-
-  <property>
-    <name>dfs.client.read.shortcircuit</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>dfs.domain.socket.path</name>
-    <value>/var/lib/hadoop-hdfs/dn_socket</value>
-  </property>
-
-  <property>
-    <name>dfs.block.access.token.enable</name>
-    <value>true</value>
-    <description>
-    If "true", access tokens are used as capabilities for accessing
-    datanodes. If "false", no access tokens are checked on accessing
-    datanodes.
-    </description>
-  </property>
-
-  <property>
-    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
-    <value>false</value>
-  </property>
-</configuration>
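(For reference: values such as dfs.replication in the hdfs-site.xml above are read through Hadoop's Configuration API. A minimal sketch; the resource name and key come from the file above, and running it assumes hadoop-common on the classpath:

    import org.apache.hadoop.conf.Configuration;

    public class HdfsSiteDemo {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // hdfs-site.xml is resolved from the classpath,
            // typically via $HADOOP_CONF_DIR.
            conf.addResource("hdfs-site.xml");
            // Falls back to the supplied default when the key is absent.
            int replication = conf.getInt("dfs.replication", 3);
            System.out.println("dfs.replication = " + replication);
        }
    }

Keys marked <final>true</final>, like dfs.name.dir above, cannot be overridden by resources loaded later.)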
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-acls.xml
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-acls.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-acls.xml
deleted file mode 100644
index cba69f4..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-acls.xml
+++ /dev/null
@@ -1,135 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<configuration>
-
-  <property>
-    <name>hadoop.kms.acl.CREATE</name>
-    <value>*</value>
-    <description>
-      ACL for create-key operations.
-      If the user is not in the GET ACL, the key material is not returned
-      as part of the response.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.DELETE</name>
-    <value>*</value>
-    <description>
-      ACL for delete-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.ROLLOVER</name>
-    <value>*</value>
-    <description>
-      ACL for rollover-key operations.
-      If the user is not in the GET ACL, the key material is not returned
-      as part of the response.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GET</name>
-    <value>*</value>
-    <description>
-      ACL for get-key-version and get-current-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GET_KEYS</name>
-    <value>*</value>
-    <description>
-      ACL for get-keys operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GET_METADATA</name>
-    <value>*</value>
-    <description>
-      ACL for get-key-metadata and get-keys-metadata operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.SET_KEY_MATERIAL</name>
-    <value>*</value>
-    <description>
-      Complementary ACL for CREATE and ROLLOVER operations to allow the client
-      to provide the key material when creating or rolling a key.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GENERATE_EEK</name>
-    <value>*</value>
-    <description>
-      ACL for generateEncryptedKey CryptoExtension operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.DECRYPT_EEK</name>
-    <value>*</value>
-    <description>
-      ACL for decryptEncryptedKey CryptoExtension operations.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.MANAGEMENT</name>
-    <value>*</value>
-    <description>
-      Default ACL for MANAGEMENT operations for all key ACLs that are not
-      explicitly defined.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.GENERATE_EEK</name>
-    <value>*</value>
-    <description>
-      Default ACL for GENERATE_EEK operations for all key ACLs that are not
-      explicitly defined.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.DECRYPT_EEK</name>
-    <value>*</value>
-    <description>
-      Default ACL for DECRYPT_EEK operations for all key ACLs that are not
-      explicitly defined.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.READ</name>
-    <value>*</value>
-    <description>
-      Default ACL for READ operations for all key ACLs that are not
-      explicitly defined.
-    </description>
-  </property>
-</configuration>
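(For reference: the 'user list, blank, group list' ACL syntax that recurs in hadoop-policy.xml and kms-acls.xml above is parsed by Hadoop's AccessControlList class. A minimal sketch; the example ACL string is the one quoted in the descriptions above, and running it assumes hadoop-common on the classpath:

    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.security.authorize.AccessControlList;

    public class AclDemo {
        public static void main(String[] args) {
            // Users "alice" and "bob", plus members of groups
            // "users" and "wheel".
            AccessControlList acl =
                new AccessControlList("alice,bob users,wheel");
            UserGroupInformation ugi =
                UserGroupInformation.createRemoteUser("alice");
            System.out.println(acl.isUserAllowed(ugi)); // true
            // A bare "*" allows everyone, matching the defaults above.
            System.out.println(new AccessControlList("*").isAllAllowed()); // true
        }
    }

This is why every property above can safely default to "*": the value is a wide-open ACL, not a wildcard match on names.)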
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-env.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-env.sh b/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-env.sh
deleted file mode 100644
index 44dfe6a..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-env.sh
+++ /dev/null
@@ -1,55 +0,0 @@
-#!/bin/bash
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License. See accompanying LICENSE file.
-#
-
-# Set KMS-specific environment variables here.
-
-# Settings for the embedded Tomcat that runs KMS.
-# Java system properties for KMS should be specified in this variable.
-#
-# export CATALINA_OPTS=
-
-# KMS logs directory
-#
-# export KMS_LOG=${KMS_HOME}/logs
-
-# KMS temporary directory
-#
-# export KMS_TEMP=${KMS_HOME}/temp
-
-# The HTTP port used by KMS
-#
-# export KMS_HTTP_PORT=16000
-
-# The admin port used by KMS
-#
-# export KMS_ADMIN_PORT=`expr ${KMS_HTTP_PORT} + 1`
-
-# The maximum number of Tomcat handler threads
-#
-# export KMS_MAX_THREADS=1000
-
-# The location of the SSL keystore if using SSL
-#
-# export KMS_SSL_KEYSTORE_FILE=${HOME}/.keystore
-
-# The password of the SSL keystore if using SSL
-#
-# export KMS_SSL_KEYSTORE_PASS=password
-
-# The full path to any native libraries that need to be loaded
-# (for example, the location of natively compiled Tomcat Apache Portable
-# Runtime (APR) libraries)
-#
-# export JAVA_LIBRARY_PATH=${HOME}/lib/native

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/37d5c28c/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-log4j.properties
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-log4j.properties b/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-log4j.properties
deleted file mode 100644
index 8e6d909..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-log4j.properties
+++ /dev/null
@@ -1,38 +0,0 @@
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License. See accompanying LICENSE file.
-#
-
-# If the Java system property 'kms.log.dir' is not defined at KMS start-up
-# time, setup sets its value to '${kms.home}/logs'.
-
-log4j.appender.kms=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.kms.DatePattern='.'yyyy-MM-dd
-log4j.appender.kms.File=${kms.log.dir}/kms.log
-log4j.appender.kms.Append=true
-log4j.appender.kms.layout=org.apache.log4j.PatternLayout
-log4j.appender.kms.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
-
-log4j.appender.kms-audit=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.kms-audit.DatePattern='.'yyyy-MM-dd
-log4j.appender.kms-audit.File=${kms.log.dir}/kms-audit.log
-log4j.appender.kms-audit.Append=true
-log4j.appender.kms-audit.layout=org.apache.log4j.PatternLayout
-log4j.appender.kms-audit.layout.ConversionPattern=%d{ISO8601} %m%n
-
-log4j.logger.kms-audit=INFO, kms-audit
-log4j.additivity.kms-audit=false
-
-log4j.rootLogger=ALL, kms
-log4j.logger.org.apache.hadoop.conf=ERROR
-log4j.logger.org.apache.hadoop=INFO
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
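(For reference: the logger/appender split in kms-log4j.properties above routes audit records away from the main kms.log. A minimal sketch using the log4j 1.x API; the audit message text here is made up purely for illustration, and running it assumes log4j 1.x on the classpath:

    import org.apache.log4j.Logger;

    public class KmsAuditLogDemo {
        public static void main(String[] args) {
            // Matches the "kms-audit" logger configured above: INFO level,
            // writing only to the kms-audit appender, since
            // additivity=false keeps audit records out of the root
            // appender's kms.log.
            Logger audit = Logger.getLogger("kms-audit");
            audit.info("sample audit record (illustrative only)");
        }
    }

Anything logged through other org.apache.hadoop loggers would instead flow to the root "kms" appender at the levels set above.)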