From: pallavi@apache.org
To: commits@falcon.apache.org
Reply-To: dev@falcon.apache.org
Date: Tue, 01 Mar 2016 07:24:15 -0000
Subject: [1/7] falcon git commit: Removing addons/ non-docs directory from asf-site branch
X-Mailer: ASF-Git Admin Mailer

Repository: falcon
Updated Branches:
  refs/heads/asf-site 8609ffd6f -> 6f5b476cc

http://git-wip-us.apache.org/repos/asf/falcon/blob/6f5b476c/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-secure.properties
----------------------------------------------------------------------
diff --git a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-secure.properties b/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-secure.properties
deleted file mode 100644
index 8d00bb5..0000000
--- a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-secure.properties
+++ /dev/null
@@ -1,108 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-##### NOTE: This is a TEMPLATE file which can be copied and edited
-
-##### Recipe properties
-falcon.recipe.name=hive-disaster-recovery
-
-
-##### Workflow properties
-falcon.recipe.workflow.name=hive-dr-workflow
-# Provide Wf absolute path. This can be HDFS or local FS path. If WF is on local FS it will be copied to HDFS
-falcon.recipe.workflow.path=/recipes/hive-replication/hive-disaster-recovery-secure-workflow.xml
-
-##### Cluster properties
-
-# Change the cluster name where the replication job should run
-falcon.recipe.cluster.name=backupCluster
-# Change the cluster hdfs write end point here. This is mandatory.
-falcon.recipe.cluster.hdfs.writeEndPoint=hdfs://localhost:8020
-# Change the cluster validity start time here
-falcon.recipe.cluster.validity.start=2014-10-01T00:00Z
-# Change the cluster validity end time here
-falcon.recipe.cluster.validity.end=2016-12-30T00:00Z
-# Change the cluster namenode kerberos principal. This is mandatory on secure clusters.
-falcon.recipe.nn.principal=nn/_HOST@EXAMPLE.COM
-
-##### Scheduling properties
-
-# Change the process frequency here. Valid frequency types are minutes, hours, days, months
-falcon.recipe.process.frequency=minutes(60)
-
-##### Retry policy properties
-
-falcon.recipe.retry.policy=periodic
-falcon.recipe.retry.delay=minutes(30)
-falcon.recipe.retry.attempts=3
-falcon.recipe.retry.onTimeout=false
-
-##### Tag properties - An optional comma separated list of key=value tags
-##### Uncomment to add tags
-#falcon.recipe.tags=owner=landing,pipeline=adtech
-
-##### ACL properties - Uncomment and change ACL if authorization is enabled
-
-#falcon.recipe.acl.owner=testuser
-#falcon.recipe.acl.group=group
-#falcon.recipe.acl.permission=0x755
-
-##### Custom Job properties
-
-##### Source Cluster DR properties
-sourceCluster=primaryCluster
-sourceMetastoreUri=thrift://localhost:9083
-sourceHiveServer2Uri=hive2://localhost:10000
-# For DB level replication to replicate multiple databases specify a comma separated list of databases
-sourceDatabase=default
-# For DB level replication specify * for sourceTable.
-# For table level replication to replicate multiple tables specify a comma separated list of tables
-sourceTable=testtable_dr
-## Please specify staging dir in the source without fully qualified domain name.
-sourceStagingPath=/apps/hive/tools/dr
-sourceNN=hdfs://localhost:8020
-# Specify the kerberos principal required to access the source namenode and hive servers; optional on non-secure clusters.
-sourceNNKerberosPrincipal=nn/_HOST@EXAMPLE.COM
-sourceHiveMetastoreKerberosPrincipal=hive/_HOST@EXAMPLE.COM
-sourceHive2KerberosPrincipal=hive/_HOST@EXAMPLE.COM
-
-##### Target Cluster DR properties
-targetCluster=backupCluster
-targetMetastoreUri=thrift://localhost:9083
-targetHiveServer2Uri=hive2://localhost:10000
-## Please specify staging dir in the target without fully qualified domain name.
-targetStagingPath=/apps/hive/tools/dr
-targetNN=hdfs://localhost:8020
-# Specify the kerberos principal required to access the target namenode and hive servers; optional on non-secure clusters.
-targetNNKerberosPrincipal=nn/_HOST@EXAMPLE.COM
-targetHiveMetastoreKerberosPrincipal=hive/_HOST@EXAMPLE.COM
-targetHive2KerberosPrincipal=hive/_HOST@EXAMPLE.COM
-
-# Caps the max number of events processed each time the job runs. Set it based on your bandwidth limit.
-# Setting it to -1 will process all the events but can hog the bandwidth. Use it judiciously!
-maxEvents=-1
-# Change it to specify the maximum number of mappers for replication
-replicationMaxMaps=5
-# Change it to specify the maximum number of mappers for DistCP
-distcpMaxMaps=1
-# Change it to specify the bandwidth in MB for each mapper in DistCP
-distcpMapBandwidth=100
-
-##### Email Notification for Falcon instance completion
-falcon.recipe.notification.type=email
-falcon.recipe.notification.receivers=NA
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/falcon/blob/6f5b476c/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-template.xml
----------------------------------------------------------------------
diff --git a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-template.xml b/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-template.xml
deleted file mode 100644
index f0de091..0000000
--- a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-template.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-    _falcon_mirroring_type=HIVE
-    1
-    LAST_ONLY
-    ##process.frequency##
-    UTC

http://git-wip-us.apache.org/repos/asf/falcon/blob/6f5b476c/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-workflow.xml
----------------------------------------------------------------------
diff --git
a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-workflow.xml b/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-workflow.xml
deleted file mode 100644
index 296e049..0000000
--- a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery-workflow.xml
+++ /dev/null
@@ -1,249 +0,0 @@
-    ${jobTracker}
-    ${nameNode}
-    oozie.launcher.mapreduce.job.user.classpath.first
-    true
-    mapred.job.queue.name
-    ${queueName}
-    oozie.launcher.mapred.job.priority
-    ${jobPriority}
-    oozie.use.system.libpath
-    true
-    oozie.action.sharelib.for.java
-    distcp,hive,hive2,hcatalog
-    org.apache.falcon.hive.HiveDRTool
-    -Dmapred.job.queue.name=${queueName}
-    -Dmapred.job.priority=${jobPriority}
-    -falconLibPath
-    ${wf:conf("falcon.libpath")}
-    -sourceCluster
-    ${sourceCluster}
-    -sourceMetastoreUri
-    ${sourceMetastoreUri}
-    -sourceHiveServer2Uri
-    ${sourceHiveServer2Uri}
-    -sourceDatabase
-    ${sourceDatabase}
-    -sourceTable
-    ${sourceTable}
-    -sourceStagingPath
-    ${sourceStagingPath}
-    -sourceNN
-    ${sourceNN}
-    -targetCluster
-    ${targetCluster}
-    -targetMetastoreUri
-    ${targetMetastoreUri}
-    -targetHiveServer2Uri
-    ${targetHiveServer2Uri}
-    -targetStagingPath
-    ${targetStagingPath}
-    -targetNN
-    ${targetNN}
-    -maxEvents
-    ${maxEvents}
-    -clusterForJobRun
-    ${clusterForJobRun}
-    -clusterForJobRunWriteEP
-    ${clusterForJobRunWriteEP}
-    -drJobName
-    ${drJobName}-${nominalTime}
-    -executionStage
-    lastevents

-    ${jobTracker}
-    ${nameNode}
-    oozie.launcher.mapreduce.job.user.classpath.first
-    true
-    mapred.job.queue.name
-    ${queueName}
-    oozie.launcher.mapred.job.priority
-    ${jobPriority}
-    oozie.use.system.libpath
-    true
-    oozie.action.sharelib.for.java
-    distcp,hive,hive2,hcatalog
-    org.apache.falcon.hive.HiveDRTool
-    -Dmapred.job.queue.name=${queueName}
-    -Dmapred.job.priority=${jobPriority}
-    -falconLibPath
-    ${wf:conf("falcon.libpath")}
-    -replicationMaxMaps
-    ${replicationMaxMaps}
-    -distcpMaxMaps
-    ${distcpMaxMaps}
-    -sourceCluster
-    ${sourceCluster}
-    -sourceMetastoreUri
-    ${sourceMetastoreUri}
-    -sourceHiveServer2Uri
-    ${sourceHiveServer2Uri}
-    -sourceDatabase
-    ${sourceDatabase}
-    -sourceTable
-    ${sourceTable}
-    -sourceStagingPath
-    ${sourceStagingPath}
-    -sourceNN
-    ${sourceNN}
-    -targetCluster
-    ${targetCluster}
-    -targetMetastoreUri
-    ${targetMetastoreUri}
-    -targetHiveServer2Uri
-    ${targetHiveServer2Uri}
-    -targetStagingPath
-    ${targetStagingPath}
-    -targetNN
-    ${targetNN}
-    -maxEvents
-    ${maxEvents}
-    -distcpMapBandwidth
-    ${distcpMapBandwidth}
-    -clusterForJobRun
-    ${clusterForJobRun}
-    -clusterForJobRunWriteEP
-    ${clusterForJobRunWriteEP}
-    -drJobName
-    ${drJobName}-${nominalTime}
-    -executionStage
-    export
-    -counterLogDir
-    ${logDir}/job-${nominalTime}/${srcClusterName == 'NA' ? '' : srcClusterName}/

-    ${jobTracker}
-    ${nameNode}
-    oozie.launcher.mapreduce.job.user.classpath.first
-    true
-    mapred.job.queue.name
-    ${queueName}
-    oozie.launcher.mapred.job.priority
-    ${jobPriority}
-    oozie.use.system.libpath
-    true
-    oozie.action.sharelib.for.java
-    distcp,hive,hive2,hcatalog
-    org.apache.falcon.hive.HiveDRTool
-    -Dmapred.job.queue.name=${queueName}
-    -Dmapred.job.priority=${jobPriority}
-    -falconLibPath
-    ${wf:conf("falcon.libpath")}
-    -replicationMaxMaps
-    ${replicationMaxMaps}
-    -distcpMaxMaps
-    ${distcpMaxMaps}
-    -sourceCluster
-    ${sourceCluster}
-    -sourceMetastoreUri
-    ${sourceMetastoreUri}
-    -sourceHiveServer2Uri
-    ${sourceHiveServer2Uri}
-    -sourceDatabase
-    ${sourceDatabase}
-    -sourceTable
-    ${sourceTable}
-    -sourceStagingPath
-    ${sourceStagingPath}
-    -sourceNN
-    ${sourceNN}
-    -targetCluster
-    ${targetCluster}
-    -targetMetastoreUri
-    ${targetMetastoreUri}
-    -targetHiveServer2Uri
-    ${targetHiveServer2Uri}
-    -targetStagingPath
-    ${targetStagingPath}
-    -targetNN
-    ${targetNN}
-    -maxEvents
-    ${maxEvents}
-    -distcpMapBandwidth
-    ${distcpMapBandwidth}
-    -clusterForJobRun
-    ${clusterForJobRun}
-    -clusterForJobRunWriteEP
-    ${clusterForJobRunWriteEP}
-    -drJobName
-    ${drJobName}-${nominalTime}
-    -executionStage
-    import

-    Workflow action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]

http://git-wip-us.apache.org/repos/asf/falcon/blob/6f5b476c/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery.properties
----------------------------------------------------------------------
diff --git a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery.properties b/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery.properties
deleted file mode 100644
index b14ec7c..0000000
--- a/addons/recipes/hive-disaster-recovery/src/main/resources/hive-disaster-recovery.properties
+++ /dev/null
@@ -1,98 +0,0 @@
-#
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-
-##### NOTE: This is a TEMPLATE file which can be copied and edited
-
-##### Recipe properties
-falcon.recipe.name=hive-disaster-recovery
-
-
-##### Workflow properties
-falcon.recipe.workflow.name=hive-dr-workflow
-# Provide Wf absolute path. This can be HDFS or local FS path.
-# If WF is on local FS it will be copied to HDFS
-falcon.recipe.workflow.path=/recipes/hive-replication/hive-disaster-recovery-workflow.xml
-
-##### Cluster properties
-
-# Change the cluster name where the replication job should run
-falcon.recipe.cluster.name=backupCluster
-# Change the cluster hdfs write end point here. This is mandatory.
-falcon.recipe.cluster.hdfs.writeEndPoint=hdfs://localhost:8020
-# Change the cluster validity start time here
-falcon.recipe.cluster.validity.start=2014-10-01T00:00Z
-# Change the cluster validity end time here
-falcon.recipe.cluster.validity.end=2016-12-30T00:00Z
-
-##### Scheduling properties
-
-# Change the process frequency here. Valid frequency types are minutes, hours, days, months
-falcon.recipe.process.frequency=minutes(60)
-
-##### Retry policy properties
-
-falcon.recipe.retry.policy=periodic
-falcon.recipe.retry.delay=minutes(30)
-falcon.recipe.retry.attempts=3
-falcon.recipe.retry.onTimeout=false
-
-##### Tag properties - An optional comma separated list of key=value tags
-##### Uncomment to add tags
-#falcon.recipe.tags=owner=landing,pipeline=adtech
-
-##### ACL properties - Uncomment and change ACL if authorization is enabled
-
-#falcon.recipe.acl.owner=testuser
-#falcon.recipe.acl.group=group
-#falcon.recipe.acl.permission=0x755
-
-##### Custom Job properties
-
-##### Source Cluster DR properties
-sourceCluster=primaryCluster
-sourceMetastoreUri=thrift://localhost:9083
-sourceHiveServer2Uri=hive2://localhost:10000
-# For DB level replication to replicate multiple databases specify a comma separated list of databases
-sourceDatabase=default
-# For DB level replication specify * for sourceTable.
-# For table level replication to replicate multiple tables specify a comma separated list of tables
-sourceTable=testtable_dr
-## Please specify staging dir in the source without fully qualified domain name.
-sourceStagingPath=/apps/hive/tools/dr
-sourceNN=hdfs://localhost:8020
-
-##### Target Cluster DR properties
-targetCluster=backupCluster
-targetMetastoreUri=thrift://localhost:9083
-targetHiveServer2Uri=hive2://localhost:10000
-## Please specify staging dir in the target without fully qualified domain name.
-targetStagingPath=/apps/hive/tools/dr
-targetNN=hdfs://localhost:8020
-
-# Caps the max number of events processed each time the job runs. Set it based on your bandwidth limit.
-# Setting it to -1 will process all the events but can hog the bandwidth. Use it judiciously!
-maxEvents=-1
-# Change it to specify the maximum number of mappers for replication
-replicationMaxMaps=5
-# Change it to specify the maximum number of mappers for DistCP
-distcpMaxMaps=1
-# Change it to specify the bandwidth in MB for each mapper in DistCP
-distcpMapBandwidth=100
-
-##### Email Notification for Falcon instance completion
-falcon.recipe.notification.type=email
-falcon.recipe.notification.receivers=NA
\ No newline at end of file
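The recipe files deleted above are plain java-style key=value properties. As a minimal sketch (not part of Falcon, and the required-key list is an assumption based on the template's own "This is mandatory" comments), the format can be parsed and sanity-checked before a recipe is submitted:

```python
def parse_properties(text):
    """Parse java-style key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props


# Hypothetical required-key list, inferred from the template comments;
# the real recipe tool may validate a different set.
REQUIRED = [
    "falcon.recipe.name",
    "falcon.recipe.cluster.hdfs.writeEndPoint",
    "sourceCluster",
    "targetCluster",
]


def missing_keys(props):
    """Return the required keys absent from a parsed properties dict."""
    return [k for k in REQUIRED if k not in props]


sample = """
# trimmed-down example in the same format as the deleted templates
falcon.recipe.name=hive-disaster-recovery
falcon.recipe.cluster.hdfs.writeEndPoint=hdfs://localhost:8020
sourceCluster=primaryCluster
"""
props = parse_properties(sample)
print(missing_keys(props))  # -> ['targetCluster']
```

Note that values such as `##process.frequency##` in the template XML are placeholders that the recipe tool substitutes from these properties; a checker like the one above only catches keys that are missing entirely.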