From: espino@apache.org
To: commits@hawq.incubator.apache.org
Reply-To: dev@hawq.incubator.apache.org
Mailing-List: contact commits-help@hawq.incubator.apache.org; run by ezmlm
Date: Fri, 03 Feb 2017 09:00:22 -0000
In-Reply-To: <40325a477040484aa0872e11ff2aa29a@git.apache.org>
References: <40325a477040484aa0872e11ff2aa29a@git.apache.org>
X-Mailer: ASF-Git Admin Mailer
Subject: [20/50] [abbrv] incubator-hawq git commit: Revert "HAWQ-1248. Merge Dockerfiles for HAWQ Dev into HAWQ code base."
archived-at: Fri, 03 Feb 2017 09:00:19 -0000

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-site.xml
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-site.xml b/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-site.xml
deleted file mode 100644
index a810ca4..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/kms-site.xml
+++ /dev/null
@@ -1,173 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<configuration>
-
-  <property>
-    <name>hadoop.kms.key.provider.uri</name>
-    <value>jceks://file@/${user.home}/kms.keystore</value>
-    <description>
-      URI of the backing KeyProvider for the KMS.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
-    <value>none</value>
-    <description>
-      If using the JavaKeyStoreProvider, the password for the keystore file.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.cache.enable</name>
-    <value>true</value>
-    <description>
-      Whether the KMS will act as a cache for the backing KeyProvider.
-      When the cache is enabled, operations like getKeyVersion, getMetadata,
-      and getCurrentKey will sometimes return cached data without consulting
-      the backing KeyProvider. Cached values are flushed when keys are deleted
-      or modified.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.cache.timeout.ms</name>
-    <value>600000</value>
-    <description>
-      Expiry time for the KMS key version and key metadata cache, in
-      milliseconds. This affects getKeyVersion and getMetadata.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.current.key.cache.timeout.ms</name>
-    <value>30000</value>
-    <description>
-      Expiry time for the KMS current key cache, in milliseconds. This
-      affects getCurrentKey operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.audit.aggregation.window.ms</name>
-    <value>10000</value>
-    <description>
-      Duplicate audit log events within the aggregation window (specified in
-      ms) are quashed to reduce log traffic. A single message for aggregated
-      events is printed at the end of the window, along with a count of the
-      number of aggregated events.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.type</name>
-    <value>simple</value>
-    <description>
-      Authentication type for the KMS. Can be either "simple"
-      or "kerberos".
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.kerberos.keytab</name>
-    <value>${user.home}/kms.keytab</value>
-    <description>
-      Path to the keytab with credentials for the configured Kerberos principal.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.kerberos.principal</name>
-    <value>HTTP/localhost</value>
-    <description>
-      The Kerberos principal to use for the HTTP endpoint.
-      The principal must start with 'HTTP/' as per the Kerberos HTTP SPNEGO specification.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.kerberos.name.rules</name>
-    <value>DEFAULT</value>
-    <description>
-      Rules used to resolve Kerberos principal names.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider</name>
-    <value>random</value>
-    <description>
-      Indicates how the secret to sign the authentication cookies will be
-      stored. Options are 'random' (default), 'string' and 'zookeeper'.
-      If using a setup with multiple KMS instances, 'zookeeper' should be used.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
-    <value>/hadoop-kms/hadoop-auth-signature-secret</value>
-    <description>
-      The Zookeeper ZNode path where the KMS instances will store and retrieve
-      the secret from.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
-    <value>#HOSTNAME#:#PORT#,...</value>
-    <description>
-      The Zookeeper connection string, a list of hostnames and port comma
-      separated.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
-    <value>kerberos</value>
-    <description>
-      The Zookeeper authentication type, 'none' or 'sasl' (Kerberos).
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab</name>
-    <value>/etc/hadoop/conf/kms.keytab</value>
-    <description>
-      The absolute path for the Kerberos keytab with the credentials to
-      connect to Zookeeper.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal</name>
-    <value>kms/#HOSTNAME#</value>
-    <description>
-      The Kerberos service principal used to connect to Zookeeper.
-    </description>
-  </property>
-
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/log4j.properties
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/log4j.properties b/contrib/hawq-docker/centos7-docker/hawq-test/conf/log4j.properties
deleted file mode 100644
index c901ab1..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/log4j.properties
+++ /dev/null
@@ -1,291 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Define some default values that can be overridden by system properties
-hadoop.root.logger=INFO,console
-hadoop.log.dir=.
-hadoop.log.file=hadoop.log
-
-# Define the root logger to the system property "hadoop.root.logger".
-log4j.rootLogger=${hadoop.root.logger}, EventCounter
-
-# Logging Threshold
-log4j.threshold=ALL
-
-# Null Appender
-log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
-
-#
-# Rolling File Appender - cap space usage at 5gb.
-#
-hadoop.log.maxfilesize=256MB
-hadoop.log.maxbackupindex=20
-log4j.appender.RFA=org.apache.log4j.RollingFileAppender
-log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
-
-log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
-log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
-
-log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
-
-# Pattern format: Date LogLevel LoggerName LogMessage
-log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-# Debugging Pattern format
-#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
-
-
-#
-# Daily Rolling File Appender
-#
-
-log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
-
-# Rollover at midnight
-log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
-
-log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
-
-# Pattern format: Date LogLevel LoggerName LogMessage
-log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-# Debugging Pattern format
-#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
-
-
-#
-# console
-# Add "console" to rootlogger above if you want to use this
-#
-
-log4j.appender.console=org.apache.log4j.ConsoleAppender
-log4j.appender.console.target=System.err
-log4j.appender.console.layout=org.apache.log4j.PatternLayout
-log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
-
-#
-# TaskLog Appender
-#
-
-#Default values
-hadoop.tasklog.taskid=null
-hadoop.tasklog.iscleanup=false
-hadoop.tasklog.noKeepSplits=4
-hadoop.tasklog.totalLogFileSize=100
-hadoop.tasklog.purgeLogSplits=true
-hadoop.tasklog.logsRetainHours=12
-
-log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
-log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
-log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
-log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
-
-log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
-log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-
-#
-# HDFS block state change log from block manager
-#
-# Uncomment the following to suppress normal block state change
-# messages from BlockManager in NameNode.
-#log4j.logger.BlockStateChange=WARN
-
-#
-#Security appender
-#
-hadoop.security.logger=INFO,NullAppender
-hadoop.security.log.maxfilesize=256MB
-hadoop.security.log.maxbackupindex=20
-log4j.category.SecurityLogger=${hadoop.security.logger}
-hadoop.security.log.file=SecurityAuth-${user.name}.audit
-log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
-log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
-log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
-log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
-log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
-
-#
-# Daily Rolling Security appender
-#
-log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
-log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
-log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
-log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
-
-#
-# hadoop configuration logging
-#
-
-# Uncomment the following line to turn off configuration deprecation warnings.
-# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
-
-#
-# hdfs audit logging
-#
-hdfs.audit.logger=INFO,NullAppender
-hdfs.audit.log.maxfilesize=256MB
-hdfs.audit.log.maxbackupindex=20
-log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
-log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
-log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
-log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
-log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
-log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
-log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
-
-#
-# NameNode metrics logging.
-# The default is to retain two namenode-metrics.log files up to 64MB each.
-#
-namenode.metrics.logger=INFO,NullAppender
-log4j.logger.NameNodeMetricsLog=${namenode.metrics.logger}
-log4j.additivity.NameNodeMetricsLog=false
-log4j.appender.NNMETRICSRFA=org.apache.log4j.RollingFileAppender
-log4j.appender.NNMETRICSRFA.File=${hadoop.log.dir}/namenode-metrics.log
-log4j.appender.NNMETRICSRFA.layout=org.apache.log4j.PatternLayout
-log4j.appender.NNMETRICSRFA.layout.ConversionPattern=%d{ISO8601} %m%n
-log4j.appender.NNMETRICSRFA.MaxBackupIndex=1
-log4j.appender.NNMETRICSRFA.MaxFileSize=64MB
-
-#
-# mapred audit logging
-#
-mapred.audit.logger=INFO,NullAppender
-mapred.audit.log.maxfilesize=256MB
-mapred.audit.log.maxbackupindex=20
-log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
-log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
-log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
-log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
-log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
-log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
-log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
-
-# Custom Logging levels
-
-#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
-#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
-#log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=DEBUG
-
-# Jets3t library
-log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
-
-# AWS SDK & S3A FileSystem
-log4j.logger.com.amazonaws=ERROR
-log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
-log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN
-
-#
-# Event Counter Appender
-# Sends counts of logging messages at different severity levels to Hadoop Metrics.
-#
-log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
-
-#
-# Job Summary Appender
-#
-# Use following logger to send summary to separate file defined by
-# hadoop.mapreduce.jobsummary.log.file :
-# hadoop.mapreduce.jobsummary.logger=INFO,JSA
-#
-hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
-hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
-hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
-hadoop.mapreduce.jobsummary.log.maxbackupindex=20
-log4j.appender.JSA=org.apache.log4j.RollingFileAppender
-log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
-log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
-log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
-log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
-log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
-log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
-log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
-
-#
-# Yarn ResourceManager Application Summary Log
-#
-# Set the ResourceManager summary log filename
-yarn.server.resourcemanager.appsummary.log.file=rm-appsummary.log
-# Set the ResourceManager summary log level and appender
-yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
-#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
-
-# To enable AppSummaryLogging for the RM,
-# set yarn.server.resourcemanager.appsummary.logger to
-# <LEVEL>,RMSUMMARY in hadoop-env.sh
-
-# Appender for ResourceManager Application Summary Log
-# Requires the following properties to be set
-#    - hadoop.log.dir (Hadoop Log directory)
-#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
-#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
-
-log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
-log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
-log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
-log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
-log4j.appender.RMSUMMARY.MaxFileSize=256MB
-log4j.appender.RMSUMMARY.MaxBackupIndex=20
-log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
-log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-
-# HS audit log configs
-#mapreduce.hs.audit.logger=INFO,HSAUDIT
-#log4j.logger.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=${mapreduce.hs.audit.logger}
-#log4j.additivity.org.apache.hadoop.mapreduce.v2.hs.HSAuditLogger=false
-#log4j.appender.HSAUDIT=org.apache.log4j.DailyRollingFileAppender
-#log4j.appender.HSAUDIT.File=${hadoop.log.dir}/hs-audit.log
-#log4j.appender.HSAUDIT.layout=org.apache.log4j.PatternLayout
-#log4j.appender.HSAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
-#log4j.appender.HSAUDIT.DatePattern=.yyyy-MM-dd
-
-# Http Server Request Logs
-#log4j.logger.http.requests.namenode=INFO,namenoderequestlog
-#log4j.appender.namenoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.namenoderequestlog.Filename=${hadoop.log.dir}/jetty-namenode-yyyy_mm_dd.log
-#log4j.appender.namenoderequestlog.RetainDays=3
-
-#log4j.logger.http.requests.datanode=INFO,datanoderequestlog
-#log4j.appender.datanoderequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.datanoderequestlog.Filename=${hadoop.log.dir}/jetty-datanode-yyyy_mm_dd.log
-#log4j.appender.datanoderequestlog.RetainDays=3
-
-#log4j.logger.http.requests.resourcemanager=INFO,resourcemanagerrequestlog
-#log4j.appender.resourcemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.resourcemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-resourcemanager-yyyy_mm_dd.log
-#log4j.appender.resourcemanagerrequestlog.RetainDays=3
-
-#log4j.logger.http.requests.jobhistory=INFO,jobhistoryrequestlog
-#log4j.appender.jobhistoryrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.jobhistoryrequestlog.Filename=${hadoop.log.dir}/jetty-jobhistory-yyyy_mm_dd.log
-#log4j.appender.jobhistoryrequestlog.RetainDays=3
-
-#log4j.logger.http.requests.nodemanager=INFO,nodemanagerrequestlog
-#log4j.appender.nodemanagerrequestlog=org.apache.hadoop.http.HttpRequestLogAppender
-#log4j.appender.nodemanagerrequestlog.Filename=${hadoop.log.dir}/jetty-nodemanager-yyyy_mm_dd.log
-#log4j.appender.nodemanagerrequestlog.RetainDays=3
-
-# Appender for viewing information for errors and warnings
-yarn.ewma.cleanupInterval=300
-yarn.ewma.messageAgeLimitSeconds=86400
-yarn.ewma.maxUniqueMessages=250
-log4j.appender.EWMA=org.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppender
-log4j.appender.EWMA.cleanupInterval=${yarn.ewma.cleanupInterval}
-log4j.appender.EWMA.messageAgeLimitSeconds=${yarn.ewma.messageAgeLimitSeconds}
-log4j.appender.EWMA.maxUniqueMessages=${yarn.ewma.maxUniqueMessages}

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.cmd
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.cmd b/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.cmd
deleted file mode 100644
index 0d39526..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.cmd
+++ /dev/null
@@ -1,20 +0,0 @@
-@echo off
-@rem Licensed to the Apache Software Foundation (ASF) under one or more
-@rem contributor license agreements. See the NOTICE file distributed with
-@rem this work for additional information regarding copyright ownership.
-@rem The ASF licenses this file to You under the Apache License, Version 2.0
-@rem (the "License"); you may not use this file except in compliance with
-@rem the License. You may obtain a copy of the License at
-@rem
-@rem     http://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-
-set HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
-
-set HADOOP_MAPRED_ROOT_LOGGER=%HADOOP_LOGLEVEL%,RFA

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.sh b/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.sh
deleted file mode 100644
index 6be1e27..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-env.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements. See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
-
-export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
-
-export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
-
-#export HADOOP_JOB_HISTORYSERVER_OPTS=
-#export HADOOP_MAPRED_LOG_DIR="" # Where log files are stored. $HADOOP_MAPRED_HOME/logs by default.
-#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.
-#export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.
-#export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default
-#export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. Defaults to 0.

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-queues.xml.template
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-queues.xml.template b/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-queues.xml.template
deleted file mode 100644
index ce6cd20..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-queues.xml.template
+++ /dev/null
@@ -1,92 +0,0 @@
[XML markup stripped by the mail archiver; the template defined a single queue named "default" in the "running" state.]

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-site.xml.template
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-site.xml.template b/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-site.xml.template
deleted file mode 100644
index 761c352..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/mapred-site.xml.template
+++ /dev/null
@@ -1,21 +0,0 @@
[XML markup stripped by the mail archiver.]

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/slaves
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/slaves b/contrib/hawq-docker/centos7-docker/hawq-test/conf/slaves
deleted file mode 100644
index 2fbb50c..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/slaves
+++ /dev/null
@@ -1 +0,0 @@
-localhost

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-client.xml.example
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-client.xml.example b/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-client.xml.example
deleted file mode 100644
index a50dce4..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-client.xml.example
+++ /dev/null
@@ -1,80 +0,0 @@
-<?xml version="1.0"?>
-<configuration>
-
-<property>
-  <name>ssl.client.truststore.location</name>
-  <value></value>
-  <description>Truststore to be used by clients like distcp. Must be
-  specified.
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.truststore.password</name>
-  <value></value>
-  <description>Optional. Default value is "".
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.truststore.type</name>
-  <value>jks</value>
-  <description>Optional. The keystore file format, default value is "jks".
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.truststore.reload.interval</name>
-  <value>10000</value>
-  <description>Truststore reload check interval, in milliseconds.
-  Default value is 10000 (10 seconds).
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.keystore.location</name>
-  <value></value>
-  <description>Keystore to be used by clients like distcp. Must be
-  specified.
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.keystore.password</name>
-  <value></value>
-  <description>Optional. Default value is "".
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.keystore.keypassword</name>
-  <value></value>
-  <description>Optional. Default value is "".
-  </description>
-</property>
-
-<property>
-  <name>ssl.client.keystore.type</name>
-  <value>jks</value>
-  <description>Optional. The keystore file format, default value is "jks".
-  </description>
-</property>
-
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-server.xml.example
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-server.xml.example b/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-server.xml.example
deleted file mode 100644
index 02d300c..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/ssl-server.xml.example
+++ /dev/null
@@ -1,78 +0,0 @@
-<?xml version="1.0"?>
-<configuration>
-
-<property>
-  <name>ssl.server.truststore.location</name>
-  <value></value>
-  <description>Truststore to be used by NN and DN. Must be specified.
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.truststore.password</name>
-  <value></value>
-  <description>Optional. Default value is "".
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.truststore.type</name>
-  <value>jks</value>
-  <description>Optional. The keystore file format, default value is "jks".
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.truststore.reload.interval</name>
-  <value>10000</value>
-  <description>Truststore reload check interval, in milliseconds.
-  Default value is 10000 (10 seconds).
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.keystore.location</name>
-  <value></value>
-  <description>Keystore to be used by NN and DN. Must be specified.
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.keystore.password</name>
-  <value></value>
-  <description>Must be specified.
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.keystore.keypassword</name>
-  <value></value>
-  <description>Must be specified.
-  </description>
-</property>
-
-<property>
-  <name>ssl.server.keystore.type</name>
-  <value>jks</value>
-  <description>Optional. The keystore file format, default value is "jks".
-  </description>
-</property>
-
-</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/conf/yarn-env.cmd
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/conf/yarn-env.cmd b/contrib/hawq-docker/centos7-docker/hawq-test/conf/yarn-env.cmd
deleted file mode 100644
index 74da35b..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/conf/yarn-env.cmd
+++ /dev/null
@@ -1,60 +0,0 @@
-@echo off
-@rem Licensed to the Apache Software Foundation (ASF) under one or more
-@rem contributor license agreements. See the NOTICE file distributed with
-@rem this work for additional information regarding copyright ownership.
-@rem The ASF licenses this file to You under the Apache License, Version 2.0
-@rem (the "License"); you may not use this file except in compliance with
-@rem the License. You may obtain a copy of the License at
-@rem
-@rem     http://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-
-@rem User for YARN daemons
-if not defined HADOOP_YARN_USER (
-  set HADOOP_YARN_USER=%yarn%
-)
-
-if not defined YARN_CONF_DIR (
-  set YARN_CONF_DIR=%HADOOP_YARN_HOME%\conf
-)
-
-if defined YARN_HEAPSIZE (
-  @rem echo run with Java heapsize %YARN_HEAPSIZE%
-  set JAVA_HEAP_MAX=-Xmx%YARN_HEAPSIZE%m
-)
-
-if not defined YARN_LOG_DIR (
-  set YARN_LOG_DIR=%HADOOP_YARN_HOME%\logs
-)
-
-if not defined YARN_LOGFILE (
-  set YARN_LOGFILE=yarn.log
-)
-
-@rem default policy file for service-level authorization
-if not defined YARN_POLICYFILE (
-  set YARN_POLICYFILE=hadoop-policy.xml
-)
-
-if not defined YARN_ROOT_LOGGER (
-  set YARN_ROOT_LOGGER=%HADOOP_LOGLEVEL%,console
-)
-
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.dir=%YARN_LOG_DIR%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.log.dir=%YARN_LOG_DIR%
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.file=%YARN_LOGFILE%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.log.file=%YARN_LOGFILE%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.home.dir=%HADOOP_YARN_HOME%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.id.str=%YARN_IDENT_STRING%
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.home.dir=%HADOOP_YARN_HOME%
-set YARN_OPTS=%YARN_OPTS% -Dhadoop.root.logger=%YARN_ROOT_LOGGER%
-set YARN_OPTS=%YARN_OPTS% -Dyarn.root.logger=%YARN_ROOT_LOGGER%
-if defined JAVA_LIBRARY_PATH (
-  set YARN_OPTS=%YARN_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
-)
-set YARN_OPTS=%YARN_OPTS% -Dyarn.policy.file=%YARN_POLICYFILE%
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/entrypoint.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/entrypoint.sh b/contrib/hawq-docker/centos7-docker/hawq-test/entrypoint.sh
deleted file mode 100755
index abdc508..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/entrypoint.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-if [ -z "${NAMENODE}" ]; then
-  export NAMENODE=${HOSTNAME}
-fi
-
-if [ ! -f /etc/profile.d/hadoop.sh ]; then
-  echo '#!/bin/bash' | sudo tee /etc/profile.d/hadoop.sh
-  echo "export NAMENODE=${NAMENODE}" | sudo tee -a /etc/profile.d/hadoop.sh
-  sudo chmod a+x /etc/profile.d/hadoop.sh
-fi
-
-sudo start-hdfs.sh
-sudo sysctl -p
-
-exec "$@"

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1cb29096/contrib/hawq-docker/centos7-docker/hawq-test/start-hdfs.sh
----------------------------------------------------------------------
diff --git a/contrib/hawq-docker/centos7-docker/hawq-test/start-hdfs.sh b/contrib/hawq-docker/centos7-docker/hawq-test/start-hdfs.sh
deleted file mode 100755
index f39200d..0000000
--- a/contrib/hawq-docker/centos7-docker/hawq-test/start-hdfs.sh
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/bin/bash
-
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied. See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
-/usr/sbin/sshd
-
-if [ -f /etc/profile.d/hadoop.sh ]; then
-  . /etc/profile.d/hadoop.sh
-fi
-
-if [ "${NAMENODE}" == "${HOSTNAME}" ]; then
-  if [ ! -d /tmp/hdfs/name/current ]; then
-    su -l hdfs -c "hdfs namenode -format"
-  fi
-
-  if [ -z "`ps aux | grep org.apache.hadoop.hdfs.server.namenode.NameNode | grep -v grep`" ]; then
-    su -l hdfs -c "hadoop-daemon.sh start namenode"
-  fi
-else
-  if [ -z "`ps aux | grep org.apache.hadoop.hdfs.server.datanode.DataNode | grep -v grep`" ]; then
-    su -l hdfs -c "hadoop-daemon.sh start datanode"
-  fi
-fi