From: alexantonenko@apache.org
To: commits@ambari.apache.org
Date: Mon, 21 Mar 2016 14:12:03 -0000
Message-Id: <17ffe29191eb4727b12f7f9134004a38@git.apache.org>
In-Reply-To: <3dd0f2379a504f76b37bb52342fbe7f5@git.apache.org>
Subject: [4/4] ambari git commit: Revert "AMBARI-15487. Add support for ECS stack (Vijay Srinivasaraghavan via smohanty)"

Revert "AMBARI-15487. Add support for ECS stack (Vijay Srinivasaraghavan via smohanty)"

This reverts commit 4dc612602b4ddd9309ef758f97e8f57df7a64099.
Project: http://git-wip-us.apache.org/repos/asf/ambari/repo
Commit: http://git-wip-us.apache.org/repos/asf/ambari/commit/f189015a
Tree: http://git-wip-us.apache.org/repos/asf/ambari/tree/f189015a
Diff: http://git-wip-us.apache.org/repos/asf/ambari/diff/f189015a

Branch: refs/heads/branch-2.2
Commit: f189015a345d0b4b008a7baad0f4beef29ea6e8d
Parents: 75da576
Author: Alex Antonenko
Authored: Mon Mar 21 16:11:37 2016 +0200
Committer: Alex Antonenko
Committed: Mon Mar 21 16:11:53 2016 +0200

----------------------------------------------------------------------
 .../libraries/providers/hdfs_resource.py        |  14 +-
 .../libraries/resources/hdfs_resource.py        |   3 -
 ambari-server/sbin/ambari-server                |   6 +-
 .../server/actionmanager/ActionScheduler.java   |  12 --
 .../actionmanager/ExecutionCommandWrapper.java  |   2 +
 .../ambari/server/agent/ExecutionCommand.java   |  23 --
 .../controller/AmbariActionExecutionHelper.java |  13 +-
 .../AmbariCustomCommandExecutionHelper.java     |  34 +--
 .../AmbariManagementControllerImpl.java         |  26 +--
 .../server/controller/StackServiceResponse.java |  12 +-
 .../internal/ClientConfigResourceProvider.java  |   5 +-
 .../internal/StackServiceResourceProvider.java  |   6 -
 .../apache/ambari/server/state/ServiceInfo.java |  13 +-
 .../server/state/cluster/ClusterImpl.java       |   7 +-
 ambari-server/src/main/python/ambari-server.py  |  27 +--
 .../main/python/ambari_server/enableStack.py    |  94 --------
 .../main/python/ambari_server/setupActions.py   |   3 +-
 .../1.6.1.2.2.0/package/scripts/params.py       |   6 +-
 .../0.1.0/package/scripts/hbase.py              |   6 +-
 .../0.1.0/package/scripts/params_linux.py       |   3 -
 .../0.5.0.2.1/package/scripts/params_linux.py   |   5 +-
 .../0.96.0.2.0/package/scripts/params_linux.py  |   6 +-
 .../2.1.0.2.0/package/scripts/params_linux.py   |   5 +-
 .../0.12.0.2.0/package/scripts/params_linux.py  |   5 +-
 .../MAHOUT/1.0.0.2.3/package/scripts/params.py  |   5 +-
 .../4.0.0.2.0/package/scripts/params_linux.py   |   6 +-
 .../0.12.0.2.0/package/scripts/params_linux.py  |   5 +-
 .../SPARK/1.2.0.2.2/package/scripts/params.py   |   4 +-
 .../0.4.0.2.1/package/scripts/params_linux.py   |   5 +-
 .../package/scripts/mapred_service_check.py     |   2 -
 .../2.1.0.2.0/package/scripts/params_linux.py   |   7 +-
 .../YARN/2.1.0.2.0/package/scripts/yarn.py      |  17 +-
 .../2.0.6/hooks/after-INSTALL/scripts/params.py |   5 +-
 .../scripts/shared_initialization.py            |   2 +-
 .../HDP/2.0.6/hooks/before-ANY/scripts/hook.py  |   2 +-
 .../2.0.6/hooks/before-ANY/scripts/params.py    |   4 +-
 .../before-ANY/scripts/shared_initialization.py |   3 +-
 .../2.0.6/hooks/before-START/scripts/params.py  |  41 +---
 .../scripts/shared_initialization.py            |  32 +--
 .../resources/stacks/HDP/2.3.ECS/metainfo.xml   |  23 --
 .../stacks/HDP/2.3.ECS/repos/repoinfo.xml       | 122 -----------
 .../stacks/HDP/2.3.ECS/role_command_order.json  |  10 -
 .../HDP/2.3.ECS/services/ACCUMULO/metainfo.xml  |  28 ---
 .../HDP/2.3.ECS/services/ATLAS/metainfo.xml     |  28 ---
 .../services/ECS/configuration/core-site.xml    | 129 -----------
 .../services/ECS/configuration/hadoop-env.xml   | 130 -----------
 .../services/ECS/configuration/hdfs-site.xml    |  40 ----
 .../HDP/2.3.ECS/services/ECS/kerberos.json      |  53 -----
 .../HDP/2.3.ECS/services/ECS/metainfo.xml       |  84 --------
 .../services/ECS/package/scripts/ecs_client.py  | 112 ----------
 .../services/ECS/package/scripts/params.py      |  82 -------
 .../ECS/package/scripts/service_check.py        |  47 ----
 .../HDP/2.3.ECS/services/FALCON/metainfo.xml    |  27 ---
 .../HDP/2.3.ECS/services/FLUME/metainfo.xml     |  27 ---
 .../services/HBASE/configuration/hbase-env.xml  | 107 ---------
 .../services/HBASE/configuration/hbase-site.xml |  27 ---
 .../HDP/2.3.ECS/services/HBASE/kerberos.json    | 132 ------------
 .../HDP/2.3.ECS/services/HBASE/metainfo.xml     |  58 -----
 .../HDP/2.3.ECS/services/HDFS/metainfo.xml      |  27 ---
 .../HDP/2.3.ECS/services/HIVE/metainfo.xml      |  91 --------
 .../HDP/2.3.ECS/services/KAFKA/metainfo.xml     |  27 ---
 .../HDP/2.3.ECS/services/KERBEROS/metainfo.xml  |  26 ---
 .../HDP/2.3.ECS/services/KNOX/metainfo.xml      |  27 ---
 .../HDP/2.3.ECS/services/MAHOUT/metainfo.xml    |  28 ---
 .../HDP/2.3.ECS/services/OOZIE/metainfo.xml     |  27 ---
 .../HDP/2.3.ECS/services/RANGER/metainfo.xml    |  32 ---
 .../2.3.ECS/services/RANGER_KMS/metainfo.xml    |  30 ---
 .../HDP/2.3.ECS/services/SLIDER/metainfo.xml    |  27 ---
 .../HDP/2.3.ECS/services/SPARK/metainfo.xml     |  30 ---
 .../HDP/2.3.ECS/services/SQOOP/metainfo.xml     |  27 ---
 .../HDP/2.3.ECS/services/STORM/metainfo.xml     |  28 ---
 .../services/TEZ/configuration/tez-site.xml     |  27 ---
 .../HDP/2.3.ECS/services/TEZ/metainfo.xml       |  59 -----
 .../YARN/configuration-mapred/mapred-site.xml   |  34 ---
 .../services/YARN/configuration/yarn-site.xml   |  29 ---
 .../HDP/2.3.ECS/services/YARN/kerberos.json     | 215 -------------------
 .../HDP/2.3.ECS/services/YARN/metainfo.xml      | 145 -------------
 .../HDP/2.3.ECS/services/ZOOKEEPER/metainfo.xml |  51 -----
 .../AmbariManagementControllerTest.java         |  18 +-
 .../ambari/server/stack/StackManagerTest.java   |  15 +-
 .../ambari/server/state/ServiceInfoTest.java    |   8 -
 .../AMBARI_METRICS/test_metrics_collector.py    |   2 -
 .../stacks/2.0.6/HBASE/test_hbase_master.py     |   9 -
 .../python/stacks/2.0.6/HDFS/test_namenode.py   |  27 ---
 .../stacks/2.0.6/HDFS/test_service_check.py     |   4 -
 .../stacks/2.0.6/HIVE/test_hive_server.py       |  14 --
 .../2.0.6/HIVE/test_hive_service_check.py       |   6 -
 .../stacks/2.0.6/OOZIE/test_oozie_server.py     |  16 --
 .../stacks/2.0.6/OOZIE/test_service_check.py    |   5 -
 .../stacks/2.0.6/PIG/test_pig_service_check.py  |   6 -
 .../stacks/2.0.6/YARN/test_historyserver.py     |  18 +-
 .../2.0.6/YARN/test_mapreduce2_service_check.py |   6 -
 .../stacks/2.1/FALCON/test_falcon_server.py     |   6 -
 .../python/stacks/2.1/TEZ/test_service_check.py |   8 -
 .../stacks/2.1/YARN/test_apptimelineserver.py   |   1 -
 .../stacks/2.2/PIG/test_pig_service_check.py    |   6 -
 .../stacks/2.2/SPARK/test_job_history_server.py |   6 -
 .../2.3/MAHOUT/test_mahout_service_check.py     |   4 -
 .../2.3/SPARK/test_spark_thrift_server.py       |   2 -
 .../test/python/stacks/2.3/YARN/test_ats_1_5.py |   5 -
 .../resources/stacks/HDP/2.2.0.ECS/metainfo.xml |  24 ---
 .../stacks/HDP/2.2.0.ECS/repos/hdp.json         |  10 -
 .../stacks/HDP/2.2.0.ECS/repos/repoinfo.xml     |  62 ------
 .../HDP/2.2.0.ECS/services/ECS/metainfo.xml     |  35 ---
 .../HDP/2.2.0.ECS/services/HDFS/metainfo.xml    |  28 ---
 .../app/controllers/wizard/step4_controller.js  |  35 ++-
 ambari-web/app/data/HDP2/site_properties.js     |  15 --
 ambari-web/app/mappers/stack_service_mapper.js  |   1 -
 .../app/mixins/common/configs/configs_saver.js  |   4 +-
 ambari-web/app/models/stack_service.js          |   6 +-
 .../mixins/common/configs/configs_saver_test.js |  49 +----
 .../ambari/fast_hdfs_resource/Runner.java       |   8 +-
 dev-support/docker/docker/Dockerfile            |  16 +-
 113 files changed, 99 insertions(+), 3115 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py
----------------------------------------------------------------------
diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py
index bd4e571..d200956 100644
--- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py
+++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py
@@ -52,8 +52,8 @@ RESOURCE_TO_JSON_FIELDS = {
   'recursive_chown': 'recursiveChown',
   'recursive_chmod': 'recursiveChmod',
   'change_permissions_for_parents': 'changePermissionforParents',
-  'manage_if_exists': 'manageIfExists',
-  'dfs_type': 'dfs_type'
+  'manage_if_exists': 'manageIfExists'
+  }

 class HdfsResourceJar:
@@ -404,11 +404,9 @@ class HdfsResourceWebHDFS:
 class HdfsResourceProvider(Provider):
   def __init__(self, resource):
     super(HdfsResourceProvider,self).__init__(resource)
+    self.assert_parameter_is_set('hdfs_site')
     self.ignored_resources_list = HdfsResourceProvider.get_ignored_resources_list(self.resource.hdfs_resource_ignore_file)
-    self.fsType = getattr(resource, 'dfs_type')
-    if self.fsType != 'HCFS':
-      self.assert_parameter_is_set('hdfs_site')
-      self.webhdfs_enabled = self.resource.hdfs_site['dfs.webhdfs.enabled']
+    self.webhdfs_enabled = self.resource.hdfs_site['dfs.webhdfs.enabled']

   @staticmethod
   def parse_path(path):
@@ -469,9 +467,7 @@ class HdfsResourceProvider(Provider):
     self.get_hdfs_resource_executor().action_execute(self)

   def get_hdfs_resource_executor(self):
-    if self.fsType == 'HCFS':
-      return HdfsResourceJar()
-    elif WebHDFSUtil.is_webhdfs_available(self.webhdfs_enabled, self.resource.default_fs):
+    if WebHDFSUtil.is_webhdfs_available(self.webhdfs_enabled, self.resource.default_fs):
       return HdfsResourceWebHDFS()
     else:
       return HdfsResourceJar()

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-common/src/main/python/resource_management/libraries/resources/hdfs_resource.py
----------------------------------------------------------------------
diff --git a/ambari-common/src/main/python/resource_management/libraries/resources/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/resources/hdfs_resource.py
index 18e61fb..03221ac 100644
--- a/ambari-common/src/main/python/resource_management/libraries/resources/hdfs_resource.py
+++ b/ambari-common/src/main/python/resource_management/libraries/resources/hdfs_resource.py
@@ -99,9 +99,6 @@ class HdfsResource(Resource):
   hdfs_site = ResourceArgument()
   default_fs = ResourceArgument()

-  # To support HCFS
-  dfs_type = ResourceArgument(default="")
-
   #action 'execute' immediately creates all pending files/directories in efficient manner
   #action 'create_delayed/delete_delayed' adds file/directory to list of pending directories
   actions = Resource.actions + ["create_on_execute", "delete_on_execute", "execute"]

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/sbin/ambari-server
----------------------------------------------------------------------
diff --git a/ambari-server/sbin/ambari-server b/ambari-server/sbin/ambari-server
index 6df8643..ef4344c 100755
--- a/ambari-server/sbin/ambari-server
+++ b/ambari-server/sbin/ambari-server
@@ -146,13 +146,9 @@ case "$1" in
         echo -e "Cleanup database..."
         $PYTHON /usr/sbin/ambari-server.py $@
         ;;
-  enable-stack)
-        echo -e "Enabling stack(s)..."
-        $PYTHON /usr/sbin/ambari-server.py $@
-        ;;
   *)
         echo "Usage: $AMBARI_PYTHON_EXECUTABLE
-        {start|stop|restart|setup|setup-jce|upgrade|status|upgradestack|setup-ldap|sync-ldap|set-current|setup-security|setup-sso|refresh-stack-hash|backup|restore|update-host-names|check-database|db-cleanup|enable-stack} [options]
+        {start|stop|restart|setup|setup-jce|upgrade|status|upgradestack|setup-ldap|sync-ldap|set-current|setup-security|setup-sso|refresh-stack-hash|backup|restore|update-host-names|check-database|db-cleanup} [options]
         Use $AMBARI_PYTHON_EXECUTABLE <action> --help to get details on options available.
         Or, simply invoke ambari-server.py --help to print the options."
         exit 1

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ActionScheduler.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ActionScheduler.java b/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ActionScheduler.java
index 9d6b7d6..5753361 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ActionScheduler.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ActionScheduler.java
@@ -32,7 +32,6 @@ import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.TimeUnit;

 import org.apache.ambari.server.AmbariException;
-import org.apache.ambari.server.ClusterNotFoundException;
 import org.apache.ambari.server.Role;
 import org.apache.ambari.server.RoleCommand;
 import org.apache.ambari.server.ServiceComponentHostNotFoundException;
@@ -953,17 +952,6 @@ class ActionScheduler implements Runnable {
     commandParamsCmd.putAll(commandParams);
     cmd.setCommandParams(commandParamsCmd);

-    try {
-      Cluster cluster = clusters.getCluster(s.getClusterName());
-      if (null != cluster) {
-        // Generate localComponents
-        for (ServiceComponentHost sch : cluster.getServiceComponentHosts(hostname)) {
-          cmd.getLocalComponents().add(sch.getServiceComponentName());
-        }
-      }
-    } catch (ClusterNotFoundException cnfe) {
-      //NOP
-    }
     //Try to get hostParams from cache and merge them with command-level parameters
     Map<String, String> hostParams = hostParamsStageCache.getIfPresent(stagePk);

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ExecutionCommandWrapper.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ExecutionCommandWrapper.java b/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ExecutionCommandWrapper.java
index 52febc4..99d61af 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ExecutionCommandWrapper.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/actionmanager/ExecutionCommandWrapper.java
@@ -19,6 +19,7 @@ package org.apache.ambari.server.actionmanager;

 import java.util.HashMap;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.Set;
 import java.util.TreeMap;
@@ -30,6 +31,7 @@ import org.apache.ambari.server.orm.dao.HostRoleCommandDAO;
 import org.apache.ambari.server.state.Cluster;
 import org.apache.ambari.server.state.Clusters;
 import org.apache.ambari.server.state.ConfigHelper;
+import org.apache.ambari.server.state.DesiredConfig;
 import org.apache.ambari.server.utils.StageUtils;

 import com.google.inject.Inject;

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/agent/ExecutionCommand.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/agent/ExecutionCommand.java b/ambari-server/src/main/java/org/apache/ambari/server/agent/ExecutionCommand.java
index 402a338..4ffc663 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/agent/ExecutionCommand.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/agent/ExecutionCommand.java
@@ -99,18 +99,12 @@ public class ExecutionCommand extends AgentCommand {
   @SerializedName("serviceName")
   private String serviceName;

-  @SerializedName("serviceType")
-  private String serviceType;
-
   @SerializedName("componentName")
   private String componentName;

   @SerializedName("kerberosCommandParams")
   private List<Map<String, String>> kerberosCommandParams = new ArrayList<Map<String, String>>();

-  @SerializedName("localComponents")
-  private Set<String> localComponents = new HashSet<String>();
-
   public String getCommandId() {
     return commandId;
   }
@@ -253,14 +247,6 @@ public class ExecutionCommand extends AgentCommand {
     this.forceRefreshConfigTagsBeforeExecution = forceRefreshConfigTagsBeforeExecution;
   }

-  public Set<String> getLocalComponents() {
-    return localComponents;
-  }
-
-  public void setLocalComponents(Set<String> localComponents) {
-    this.localComponents = localComponents;
-  }
-
   public Map<String, Map<String, Map<String, String>>> getConfigurationAttributes() {
     return configurationAttributes;
   }
@@ -284,14 +270,6 @@ public class ExecutionCommand extends AgentCommand {
   public void setServiceName(String serviceName) {
     this.serviceName = serviceName;
   }
-
-  public String getServiceType() {
-    return serviceType;
-  }
-
-  public void setServiceType(String serviceType) {
-    this.serviceType = serviceType;
-  }

   public String getComponentName() {
     return componentName;
@@ -342,7 +320,6 @@ public class ExecutionCommand extends AgentCommand {
     String SERVICE_PACKAGE_FOLDER = "service_package_folder";
     String HOOKS_FOLDER = "hooks_folder";
     String STACK_NAME = "stack_name";
-    String SERVICE_TYPE = "service_type";
     String STACK_VERSION = "stack_version";
     String SERVICE_REPO_INFO = "service_repo_info";
     String PACKAGE_LIST = "package_list";

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariActionExecutionHelper.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariActionExecutionHelper.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariActionExecutionHelper.java
index b7b66b8..ba3163b 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariActionExecutionHelper.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariActionExecutionHelper.java
@@ -29,7 +29,6 @@ import static org.apache.ambari.server.agent.ExecutionCommand.KeyNames.STACK_NAME;
 import static org.apache.ambari.server.agent.ExecutionCommand.KeyNames.STACK_VERSION;

 import java.util.Arrays;
-import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
@@ -447,16 +446,8 @@ public class AmbariActionExecutionHelper {
           execCmd.setForceRefreshConfigTagsBeforeExecution(configsToRefresh);
         }
-      }
-
-      if (null != cluster) {
-        // Generate localComponents
-        for (ServiceComponentHost sch : cluster.getServiceComponentHosts(hostName)) {
-          execCmd.getLocalComponents().add(sch.getServiceComponentName());
-        }
-      }
-
-    }
+      }
+    }
   }

   /*

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java
index c43c9b5..5688df2 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariCustomCommandExecutionHelper.java
@@ -53,7 +53,6 @@ import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
-import java.util.Iterator;
 import java.util.Set;
 import java.util.TreeMap;
@@ -590,25 +589,7 @@ public class AmbariCustomCommandExecutionHelper {
     execCmd.setClusterHostInfo(
         StageUtils.getClusterHostInfo(cluster));

-    // Generate localComponents
-    for (ServiceComponentHost sch : cluster.getServiceComponentHosts(hostname)) {
-      execCmd.getLocalComponents().add(sch.getServiceComponentName());
-    }
-
     Map<String, String> commandParams = new TreeMap<String, String>();
-
-    //Propagate HCFS service type info
-    Iterator it = cluster.getServices().values().iterator();
-    while(it.hasNext()) {
-      ServiceInfo serviceInfoInstance = ambariMetaInfo.getService(stackId.getStackName(),stackId.getStackVersion(), it.next().getName());
-      LOG.info("Iterating service type Instance in addServiceCheckAction:: " + serviceInfoInstance.getName());
-      if(serviceInfoInstance.getServiceType() != null) {
-        LOG.info("Adding service type info in addServiceCheckAction:: " + serviceInfoInstance.getServiceType());
-        commandParams.put("dfs_type",serviceInfoInstance.getServiceType());
-        break;
-      }
-    }
-
     String commandTimeout = configs.getDefaultAgentTaskTimeout(false);
@@ -934,7 +915,7 @@ public class AmbariCustomCommandExecutionHelper {
    *
    * @param actionExecutionContext  received request to execute a command
    * @param stage                   the initial stage for task creation
-   * @param requestParams           the request params
+   * @param retryAllowed            indicates whether the the command allows retry
    *
    * @throws AmbariException if the commands can not be added
    */
@@ -1118,19 +1099,6 @@ public class AmbariCustomCommandExecutionHelper {
       hostParamsStage.put(CLIENTS_TO_UPDATE_CONFIGS, clientsToUpdateConfigs);
     }
     clusterHostInfoJson = StageUtils.getGson().toJson(clusterHostInfo);
-
-    //Propogate HCFS service type info to command params
-    Iterator it = cluster.getServices().values().iterator();
-    while(it.hasNext()) {
-      ServiceInfo serviceInfoInstance = ambariMetaInfo.getService(stackId.getStackName(),stackId.getStackVersion(), it.next().getName());
-      LOG.info("Iterating service type Instance in getCommandJson:: " + serviceInfoInstance.getName());
-      if(serviceInfoInstance.getServiceType() != null) {
-        LOG.info("Adding service type info in getCommandJson:: " + serviceInfoInstance.getServiceType());
-        commandParamsStage.put("dfs_type",serviceInfoInstance.getServiceType());
-        break;
-      }
-    }
-
   }

   String hostParamsStageJson = StageUtils.getGson().toJson(hostParamsStage);

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
index 9644c4b..ac2fb22 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java
@@ -151,7 +151,6 @@ import java.util.Collections;
 import java.util.EnumMap;
 import java.util.HashMap;
 import java.util.HashSet;
-import java.util.Iterator;
 import java.util.LinkedHashSet;
 import java.util.LinkedList;
 import java.util.List;
@@ -1930,31 +1929,17 @@ public class AmbariManagementControllerImpl implements AmbariManagementController {

     Host host = clusters.getHost(scHost.getHostName());

-    LOG.info("Adding service type info in createHostAction:: " + serviceInfo.getServiceType());
-    execCmd.setServiceType(serviceInfo.getServiceType());
-
     execCmd.setConfigurations(configurations);
     execCmd.setConfigurationAttributes(configurationAttributes);
     execCmd.setConfigurationTags(configTags);
+
+    // Create a local copy for each command
     Map<String, String> commandParams = new TreeMap<String, String>();
     if (commandParamsInp != null) { // if not defined
       commandParams.putAll(commandParamsInp);
     }
-
-    //Propogate HCFS service type info
-    Iterator it = cluster.getServices().values().iterator();
-    while(it.hasNext()) {
-      ServiceInfo serviceInfoInstance = ambariMetaInfo.getService(stackId.getStackName(),stackId.getStackVersion(), it.next().getName());
-      LOG.info("Iterating service type Instance in createHostAction:: " + serviceInfoInstance.getName());
-      if(serviceInfoInstance.getServiceType() != null) {
-        LOG.info("Adding service type info in createHostAction:: " + serviceInfoInstance.getServiceType());
-        commandParams.put("dfs_type",serviceInfoInstance.getServiceType());
-        break;
-      }
-    }
-
     boolean isInstallCommand = roleCommand.equals(RoleCommand.INSTALL);
     String agentDefaultCommandTimeout = configs.getDefaultAgentTaskTimeout(isInstallCommand);
     String scriptCommandTimeout = "";
@@ -2579,13 +2564,6 @@ public class AmbariManagementControllerImpl implements AmbariManagementController {
     ec.setClusterHostInfo(
         StageUtils.getClusterHostInfo(cluster));

-    if (null != cluster) {
-      // Generate localComponents
-      for (ServiceComponentHost sch : cluster.getServiceComponentHosts(scHost.getHostName())) {
-        ec.getLocalComponents().add(sch.getServiceComponentName());
-      }
-    }
-
     // Hack - Remove passwords from configs
     if ((ec.getRole().equals(Role.HIVE_CLIENT.toString()) ||
         ec.getRole().equals(Role.WEBHCAT_SERVER.toString()) ||

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/controller/StackServiceResponse.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/StackServiceResponse.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/StackServiceResponse.java
index d16f4d6..d17fc32 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/StackServiceResponse.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/StackServiceResponse.java
@@ -33,7 +33,6 @@ public class StackServiceResponse {
   private String stackName;
   private String stackVersion;
   private String serviceName;
-  private String serviceType;
   private String serviceDisplayName;
   private String userName;
   private String comments;
@@ -62,7 +61,6 @@ public class StackServiceResponse {
    */
   public StackServiceResponse(ServiceInfo service) {
     serviceName = service.getName();
-    serviceType = service.getServiceType();
     serviceDisplayName = service.getDisplayName();
     userName = null;
     comments = service.getComment();
@@ -109,16 +107,8 @@ public class StackServiceResponse {
   public void setServiceName(String serviceName) {
     this.serviceName = serviceName;
   }
-
-  public String getServiceType() {
-    return serviceType;
-  }
-
-  public void setServiceType(String serviceType) {
-    this.serviceType = serviceType;
-  }
-
-public String getServiceDisplayName() {
+  public String getServiceDisplayName() {
     return serviceDisplayName;
   }

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java
index 4723d2a..fb6b63e 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java
@@ -410,12 +410,15 @@ public class ClientConfigResourceProvider extends AbstractControllerResourceProvider {
       } catch (TimeoutException e) {
         LOG.error("Generate client configs script was killed due to timeout ", e);
         throw new SystemException("Generate client configs script was killed due to timeout ", e);
-      } catch (InterruptedException | IOException e) {
+      } catch (InterruptedException e) {
         LOG.error("Failed to run generate client configs script for a component " + componentName, e);
         throw new SystemException("Failed to run generate client configs script for a component " + componentName, e);
       } catch (ExecutionException e) {
         LOG.error(e.getMessage(),e);
         throw new SystemException(e.getMessage() + " " + e.getCause());
+      } catch (IOException e) {
+        LOG.error("Failed to run generate client configs script for a component " + componentName, e);
+        throw new SystemException("Failed to run generate client configs script for a component " + componentName, e);
       }
     } catch (AmbariException e) {
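The ClientConfigResourceProvider hunk above replaces a Java 7 multi-catch (`InterruptedException | IOException`) with two separate catch blocks carrying identical handling. For readers more at home in the Python side of this codebase, the combined form corresponds to catching an exception tuple; a minimal sketch under stated assumptions — `run_script` and `log` are hypothetical stand-ins, not Ambari APIs:

```python
def generate_client_configs(run_script, component_name, log):
    """Run a client-config generation callable, folding two failure modes
    into one handler (the Python analogue of the Java multi-catch)."""
    try:
        return run_script()
    except (InterruptedError, IOError) as e:
        # One handler for both exception types, mirroring the duplicated
        # message in the two Java catch blocks after the revert.
        msg = "Failed to run generate client configs script for a component " + component_name
        log.append(msg)
        raise SystemError(msg) from e
```

The Java revert trades the concise multi-catch for repetition; the tuple form shows what the pre-revert code expressed in a single clause.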
http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackServiceResourceProvider.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackServiceResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackServiceResourceProvider.java
index dffc74c..130129a 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackServiceResourceProvider.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/StackServiceResourceProvider.java
@@ -40,9 +40,6 @@ public class StackServiceResourceProvider extends ReadOnlyResourceProvider {
   protected static final String SERVICE_NAME_PROPERTY_ID = PropertyHelper.getPropertyId(
       "StackServices", "service_name");
-
-  protected static final String SERVICE_TYPE_PROPERTY_ID = PropertyHelper.getPropertyId(
-      "StackServices", "service_type");

   public static final String STACK_NAME_PROPERTY_ID = PropertyHelper.getPropertyId(
       "StackServices", "stack_name");
@@ -127,9 +124,6 @@ public class StackServiceResourceProvider extends ReadOnlyResourceProvider {
       setResourceProperty(resource, SERVICE_NAME_PROPERTY_ID,
           response.getServiceName(), requestedIds);
-
-      setResourceProperty(resource, SERVICE_TYPE_PROPERTY_ID,
-          response.getServiceType(), requestedIds);

       setResourceProperty(resource, SERVICE_DISPLAY_NAME_PROPERTY_ID,
           response.getServiceDisplayName(), requestedIds);

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/state/ServiceInfo.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/state/ServiceInfo.java b/ambari-server/src/main/java/org/apache/ambari/server/state/ServiceInfo.java
index d9a8a51..b476f0e 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/state/ServiceInfo.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/state/ServiceInfo.java
@@ -56,7 +56,6 @@ public class ServiceInfo implements Validable{
   private String displayName;
   private String version;
   private String comment;
-  private String serviceType;
   private List<PropertyInfo> properties;

   @XmlElementWrapper(name="components")
@@ -254,16 +253,8 @@ public class ServiceInfo implements Validable{
   public void setDisplayName(String displayName) {
     this.displayName = displayName;
   }
-
-  public String getServiceType() {
-    return serviceType;
-  }
-
-  public void setServiceType(String serviceType) {
-    this.serviceType = serviceType;
-  }
-
-public String getVersion() {
+  public String getVersion() {
     return version;
   }
@@ -354,8 +345,6 @@ public String getVersion() {
     StringBuilder sb = new StringBuilder();
     sb.append("Service name:");
     sb.append(name);
-    sb.append("\nService type:");
-    sb.append(serviceType);
     sb.append("\nversion:");
     sb.append(version);
     sb.append("\ncomment:");

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java b/ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java
index 7c110f4..916f60b 100644
--- a/ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java
+++ b/ambari-server/src/main/java/org/apache/ambari/server/state/cluster/ClusterImpl.java
@@ -2677,11 +2677,10 @@ public class ClusterImpl implements Cluster {
           serviceName = entry.getKey();
           break;
         } else if (!serviceName.equals(entry.getKey())) {
-          String error = String.format("Updating configs for multiple services by a " +
-              "single API request isn't supported. Conflicting services %s and %s for %s",
-              serviceName, entry.getKey(), config.getType());
+          String error = "Updating configs for multiple services by a " +
+              "single API request isn't supported";
           IllegalArgumentException exception = new IllegalArgumentException(error);
-          LOG.error(error + ", config version not created for {}", serviceName);
+          LOG.error(error + ", config version not created");
           throw exception;
         } else {
           break;

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/python/ambari-server.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/python/ambari-server.py b/ambari-server/src/main/python/ambari-server.py
index 5b316a1..2faaecd 100755
--- a/ambari-server/src/main/python/ambari-server.py
+++ b/ambari-server/src/main/python/ambari-server.py
@@ -39,12 +39,11 @@ from ambari_server.setupHttps import setup_https, setup_truststore
 from ambari_server.dbCleanup import db_cleanup
 from ambari_server.hostUpdate import update_host_names
 from ambari_server.checkDatabase import check_database
-from ambari_server.enableStack import enable_stack_version

 from ambari_server.setupActions import BACKUP_ACTION, LDAP_SETUP_ACTION, LDAP_SYNC_ACTION, PSTART_ACTION, \
   REFRESH_STACK_HASH_ACTION, RESET_ACTION, RESTORE_ACTION, UPDATE_HOST_NAMES_ACTION, CHECK_DATABASE_ACTION, \
   SETUP_ACTION, SETUP_SECURITY_ACTION,START_ACTION, STATUS_ACTION, STOP_ACTION, UPGRADE_ACTION, UPGRADE_STACK_ACTION, \
-  SETUP_JCE_ACTION, SET_CURRENT_ACTION,DB_CLEANUP_ACTION, ENABLE_STACK_ACTION
+  SETUP_JCE_ACTION, SET_CURRENT_ACTION,DB_CLEANUP_ACTION
 from ambari_server.setupSecurity import setup_ldap, sync_ldap, setup_master_key, setup_ambari_krb5_jaas
 from ambari_server.userInput import get_validated_string_input
@@ -379,10 +378,6 @@ def init_parser_options(parser):
   parser.add_option('--version-display-name', default=None, help="Display name of desired repo version", dest="desired_repo_version")
   parser.add_option('--force-version', action="store_true", default=False, help="Force version to current", dest="force_repo_version")
   parser.add_option("-d", "--from-date", dest="cleanup_from_date", default=None, type="string", help="Specify date for the cleanup process in 'yyyy-MM-dd' format")
-  parser.add_option('--version', dest="stack_versions", default=None, action="append", type="string",
-                    help="Specify stack version that needs to be enabled. All other stacks versions will be disabled")
-  parser.add_option('--stack', dest="stack_name", default=None, type="string",
-                    help="Specify stack name for the stack versions that needs to be enabled")

 @OsFamilyFuncImpl(OSConst.WINSRV_FAMILY)
 def are_cmd_line_db_args_blank(options):
@@ -530,8 +525,7 @@ def create_user_action_map(args, options):
     RESTORE_ACTION: UserActionPossibleArgs(restore, [1, 2], args),
     UPDATE_HOST_NAMES_ACTION: UserActionPossibleArgs(update_host_names, [2], args, options),
     CHECK_DATABASE_ACTION: UserAction(check_database, options),
-    DB_CLEANUP_ACTION: UserAction(db_cleanup, options),
-    ENABLE_STACK_ACTION: UserAction(enable_stack, options, args)
+    DB_CLEANUP_ACTION: UserAction(db_cleanup, options)
   }
   return action_map
@@ -642,23 +636,6 @@ def mainBody():
     print_error_msg("Unexpected {0}: {1}".format((e).__class__.__name__, str(e)) +\
     "\nFor more info run ambari-server with -v or --verbose option")
     sys.exit(1)
-
-@OsFamilyFuncImpl(OsFamilyImpl.DEFAULT)
-def enable_stack(options, args):
-  if options.stack_name == None:
-    print_error_msg ("Please provide stack name using --stack option")
-    return -1
-  if options.stack_versions == None:
-    print_error_msg ("Please provide stack version using --version option")
-    return -1
-  print_info_msg ("Going to enable Stack Versions: " + str(options.stack_versions) + " for the stack: " + str(options.stack_name))
-  retcode = enable_stack_version(options.stack_name,options.stack_versions)
-  if retcode == 0:
-    status, pid = is_server_runing()
-    if status:
-      print
"restarting ambari server" - stop(options) - start(options) if __name__ == "__main__": http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/python/ambari_server/enableStack.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/python/ambari_server/enableStack.py b/ambari-server/src/main/python/ambari_server/enableStack.py deleted file mode 100644 index bf064bd..0000000 --- a/ambari-server/src/main/python/ambari_server/enableStack.py +++ /dev/null @@ -1,94 +0,0 @@ -#!/usr/bin/env python - -''' -Licensed to the Apache Software Foundation (ASF) under one -or more contributor license agreements. See the NOTICE file -distributed with this work for additional information -regarding copyright ownership. The ASF licenses this file -to you under the Apache License, Version 2.0 (the -"License"); you may not use this file except in compliance -with the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-''' - -import os -import re -import fileinput - -from ambari_commons.exceptions import FatalException -from ambari_commons.logging_utils import print_info_msg, print_warning_msg, print_error_msg, get_verbose -from ambari_commons.os_utils import is_root -from ambari_server.serverConfiguration import get_ambari_properties, get_stack_location -from ambari_server.serverUtils import is_server_runing - -# -# Stack enable/disable -# - -def enable_stack_version(stack_name, stack_versions): - if not is_root(): - err = 'Ambari-server enable-stack should be run with ' \ - 'root-level privileges' - raise FatalException(4, err) - - try: - print_info_msg("stack name requested: " + str(stack_name)) - print_info_msg("stack version requested: " + str(stack_versions)) - except IndexError: - raise FatalException("Invalid stack version passed") - - retcode = update_stack_metainfo(stack_name,stack_versions) - - if not retcode == 0: - raise FatalException(retcode, 'Stack enable request failed.') - - return retcode - -def update_stack_metainfo(stack_name, stack_versions): - properties = get_ambari_properties() - if properties == -1: - print_error_msg("Error getting ambari properties") - return -1 - - stack_location = get_stack_location(properties) - print_info_msg ("stack location: "+ stack_location) - - stack_root = os.path.join(stack_location, stack_name) - print_info_msg ("stack root: "+ stack_root) - if not os.path.exists(stack_root): - print_error_msg("stack directory does not exists: " + stack_root) - return -1 - - for stack in stack_versions: - if stack not in os.listdir(stack_root): - print_error_msg ("The requested stack version: " + stack + " is not available in the HDP stack") - return -1 - - for directory in os.listdir(stack_root): - print_info_msg("directory found: " + directory) - metainfo_file = os.path.join(stack_root, directory, "metainfo.xml") - print_info_msg("looking for metainfo file: " + metainfo_file) - if not os.path.exists(metainfo_file): - 
print_error_msg("Could not find metainfo file in the path " + metainfo_file) - continue - if directory in stack_versions: - print_info_msg ("updating stack to active for: " + directory ) - replace(metainfo_file,"false","true") - else: - print_info_msg ("updating stack to inactive for: " + directory ) - replace(metainfo_file,"true","false") - return 0 - -def replace(file_path, pattern, subst): - for line in fileinput.input(file_path, inplace=1): - line = re.sub(pattern,subst, line.rstrip()) - print(line) - - http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/python/ambari_server/setupActions.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/python/ambari_server/setupActions.py b/ambari-server/src/main/python/ambari_server/setupActions.py index 3cbd0ab..edc9aba 100644 --- a/ambari-server/src/main/python/ambari_server/setupActions.py +++ b/ambari-server/src/main/python/ambari_server/setupActions.py @@ -40,5 +40,4 @@ CHECK_DATABASE_ACTION = "check-database" BACKUP_ACTION = "backup" RESTORE_ACTION = "restore" SETUP_JCE_ACTION = "setup-jce" -DB_CLEANUP_ACTION = "db-cleanup" -ENABLE_STACK_ACTION = "enable-stack" +DB_CLEANUP_ACTION = "db-cleanup" \ No newline at end of file http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py b/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py index cf49687..5787cb9 100644 --- a/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py +++ b/ambari-server/src/main/resources/common-services/ACCUMULO/1.6.1.2.2.0/package/scripts/params.py @@ -177,9 +177,6 @@ hdfs_principal_name = 
config['configurations']['hadoop-env']['hdfs_principal_nam
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-
-dfs_type = default("/commandParams/dfs_type", "")
-
 # dfs.namenode.https-address
 import functools
 #create partial functions with common arguments for every HdfsResource call
@@ -196,6 +193,5 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py
index c807228..02833ab 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase.py
@@ -215,16 +215,14 @@ def hbase(name=None # 'master' or 'regionserver' or 'client'
                          type="directory",
                          action="create_on_execute",
                          owner=params.hbase_user,
-                         mode=0775,
-                         dfs_type=params.dfs_type
+                         mode=0775
     )
 
     params.HdfsResource(params.hbase_staging_dir,
                          type="directory",
                          action="create_on_execute",
                          owner=params.hbase_user,
-                         mode=0711,
-                         dfs_type=params.dfs_type
+                         mode=0711
     )
     params.HdfsResource(None, action="execute")

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py
index 21b491d..838e987 100644
--- a/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/params_linux.py
@@ -48,6 +48,3 @@ hbase_conf_dir = "/etc/ams-hbase/conf"
 limits_conf_dir = "/etc/security/limits.d"
 
 sudo = AMBARI_SUDO_BINARY
-
-dfs_type = default("/commandParams/dfs_type", "")
-

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
index f429aa7..d442eed 100644
--- a/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/package/scripts/params_linux.py
@@ -112,8 +112,6 @@ dfs_data_mirroring_dir = "/apps/data-mirroring"
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
@@ -129,7 +127,6 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
index 3841575..fba3109 100644
--- a/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
@@ -227,9 +227,6 @@ hdfs_principal_name = config['configurations']['hadoop-env']['hdfs_principal_nam
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
@@ -245,8 +242,7 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )
 
 # ranger host

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py
index 751acb2..60dfc6e 100644
--- a/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py
@@ -322,8 +322,6 @@ else:
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete/copyfromlocal hdfs directories/files we need to call params.HdfsResource in code
@@ -339,8 +337,7 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
index b7e63ca..25b79ab 100644
--- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py
@@ -456,8 +456,6 @@ security_param = "true" if security_enabled else "false"
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create hdfs directory we need to call params.HdfsResource in code
@@ -473,8 +471,7 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/MAHOUT/1.0.0.2.3/package/scripts/params.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/MAHOUT/1.0.0.2.3/package/scripts/params.py b/ambari-server/src/main/resources/common-services/MAHOUT/1.0.0.2.3/package/scripts/params.py
index 0ca3b10..555570e 100644
--- a/ambari-server/src/main/resources/common-services/MAHOUT/1.0.0.2.3/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/MAHOUT/1.0.0.2.3/package/scripts/params.py
@@ -75,8 +75,6 @@ log4j_props = config['configurations']['mahout-log4j']['content']
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
@@ -92,6 +90,5 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
index 35751ac..269b602 100644
---
a/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/OOZIE/4.0.0.2.0/package/scripts/params_linux.py
@@ -260,9 +260,6 @@ hdfs_principal_name = config['configurations']['hadoop-env']['hdfs_principal_nam
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
@@ -278,8 +275,7 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )
 
 is_webhdfs_enabled = config['configurations']['hdfs-site']['dfs.webhdfs.enabled']

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/params_linux.py
index 28bb6adf..aae6a3b 100644
--- a/ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/PIG/0.12.0.2.0/package/scripts/params_linux.py
@@ -76,8 +76,6 @@ log4j_props = config['configurations']['pig-log4j']['content']
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create hdfs directory we need to call params.HdfsResource in code
@@ -93,7 +91,6 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py b/ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py
index dc90c68..19666d8 100644
--- a/ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py
+++ b/ambari-server/src/main/resources/common-services/SPARK/1.2.0.2.2/package/scripts/params.py
@@ -175,7 +175,6 @@ if has_spark_thriftserver and 'spark-thrift-sparkconf' in config['configurations
 default_fs = config['configurations']['core-site']['fs.defaultFS']
 hdfs_site = config['configurations']['hdfs-site']
-dfs_type = default("/commandParams/dfs_type", "")
 
 import functools
 #create partial functions with common arguments for every HdfsResource call
@@ -192,6 +191,5 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/params_linux.py
index 10cb999..399b870 100644
--- a/ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/TEZ/0.4.0.2.1/package/scripts/params_linux.py
@@ -80,8 +80,6 @@ tez_env_sh_template = config['configurations']['tez-env']['content']
 hdfs_site = config['configurations']['hdfs-site']
 default_fs = config['configurations']['core-site']['fs.defaultFS']
-dfs_type = default("/commandParams/dfs_type", "")
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete/copyfromlocal hdfs directories/files we need to call params.HdfsResource in code
@@ -97,8 +95,7 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py b/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py
index 3edfd7b..f7fafd8 100644
--- a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py
+++ b/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py
@@ -123,13 +123,11 @@ class MapReduce2ServiceCheckDefault(MapReduce2ServiceCheck):
     params.HdfsResource(output_file,
                         action = "delete_on_execute",
                         type = "directory",
-                        dfs_type = params.dfs_type,
     )
     params.HdfsResource(input_file,
                         action = "create_on_execute",
                         type = "file",
                         source = "/etc/passwd",
-                        dfs_type = params.dfs_type,
     )
     params.HdfsResource(None, action="execute")

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py b/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
index 91fdb83..44d4e00 100644
--- a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
+++ b/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
@@ -31,6 +31,7 @@ from resource_management.libraries.functions.version import format_hdp_stack_ver
 from resource_management.libraries.functions.default import default
 from resource_management.libraries import functions
 
+
 import status_params
 
 # a map of the Ambari role to the component name
@@ -266,9 +267,6 @@ is_webhdfs_enabled = hdfs_site['dfs.webhdfs.enabled']
 # Path to file that contains list of HDFS resources to be skipped during processing
 hdfs_resource_ignore_file = "/var/lib/ambari-agent/data/.hdfs_resource_ignore"
-dfs_type = default("/commandParams/dfs_type", "")
-
-
 import functools
 #create partial functions with common arguments for every HdfsResource call
 #to create/delete hdfs directory/file/copyfromlocal we need to call params.HdfsResource in code
@@ -284,8 +282,7 @@ HdfsResource = functools.partial(
   principal_name = hdfs_principal_name,
   hdfs_site = hdfs_site,
   default_fs = default_fs,
-  immutable_paths = get_not_managed_resources(),
-  dfs_type = dfs_type
+  immutable_paths = get_not_managed_resources()
 )
 
 update_exclude_file_only = default("/commandParams/update_exclude_file_only",False)
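The params files in this revert all build `HdfsResource` the same way: `functools.partial` pre-binds the keyword arguments that are identical for every call (user, keytab, `hdfs_site`, `default_fs`, ...), so each call site only supplies what varies. A minimal runnable sketch of that pattern — `DummyResource` and the bound values here are invented stand-ins for illustration, not Ambari's real `HdfsResource` class:

```python
import functools

class DummyResource:
    """Illustrative stand-in for a resource class that takes a name plus options."""
    def __init__(self, name, **kwargs):
        self.name = name
        self.kwargs = kwargs  # holds both pre-bound and per-call keyword args

# Bind the arguments that would otherwise be repeated at every call site.
HdfsResource = functools.partial(
    DummyResource,
    user="hdfs",
    default_fs="hdfs://nn:8020",
)

# Call sites now pass only what differs per resource.
r = HdfsResource("/apps/data", type="directory", action="create_on_execute")
```

`functools.partial` merges the pre-bound keywords with the per-call ones, and a per-call keyword of the same name overrides the bound default, which is why the diff can add or drop a shared argument (like `dfs_type`) in one place.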
http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py b/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
index 05e19cf..8f5ba2b 100644
--- a/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
+++ b/ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py
@@ -187,15 +187,14 @@ def yarn(name = None):
     # During RU, Core Masters and Slaves need hdfs-site.xml
     # TODO, instead of specifying individual configs, which is susceptible to breaking when new configs are added,
     # RU should rely on all available in /usr/hdp//hadoop/conf
-    if 'hdfs-site' in params.config['configurations']:
-      XmlConfig("hdfs-site.xml",
-                conf_dir=params.hadoop_conf_dir,
-                configurations=params.config['configurations']['hdfs-site'],
-                configuration_attributes=params.config['configuration_attributes']['hdfs-site'],
-                owner=params.hdfs_user,
-                group=params.user_group,
-                mode=0644
-      )
+    XmlConfig("hdfs-site.xml",
+              conf_dir=params.hadoop_conf_dir,
+              configurations=params.config['configurations']['hdfs-site'],
+              configuration_attributes=params.config['configuration_attributes']['hdfs-site'],
+              owner=params.hdfs_user,
+              group=params.user_group,
+              mode=0644
+    )
 
     XmlConfig("mapred-site.xml",
               conf_dir=params.hadoop_conf_dir,

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/params.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/params.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/params.py
index 68fe9f9..7039f3e 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/params.py
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/params.py
@@ -26,9 +26,6 @@ from resource_management.libraries.functions import format_jvm_option
 from resource_management.libraries.functions.version import format_hdp_stack_version
 
 config = Script.get_config()
-
-dfs_type = default("/commandParams/dfs_type", "")
-
 sudo = AMBARI_SUDO_BINARY
 
 stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
@@ -87,5 +84,5 @@ user_group = config['configurations']['cluster-env']['user_group']
 namenode_host = default("/clusterHostInfo/namenode_host", [])
 has_namenode = not len(namenode_host) == 0
 
-if has_namenode or dfs_type == 'HCFS':
+if has_namenode:
   hadoop_conf_dir = conf_select.get_hadoop_conf_dir(force_latest_on_upgrade=True)

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py
index 8ee2f7a..3545c47 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/after-INSTALL/scripts/shared_initialization.py
@@ -54,7 +54,7 @@ def setup_config():
   else:
     Logger.warning("Parameter hadoop_conf_dir is missing or directory does not exist. This is expected if this host does not have any Hadoop components.")
 
-  if is_hadoop_conf_dir_present and (params.has_namenode or stackversion.find('Gluster') >= 0 or params.dfs_type == 'HCFS'):
+  if is_hadoop_conf_dir_present and (params.has_namenode or stackversion.find('Gluster') >= 0):
     # create core-site only if the hadoop config diretory exists
     XmlConfig("core-site.xml",
               conf_dir=params.hadoop_conf_dir,

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py
index c34be0b..18dd49e 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/hook.py
@@ -27,7 +27,7 @@ class BeforeAnyHook(Hook):
     env.set_params(params)
 
     setup_users()
-    if params.has_namenode or params.dfs_type == 'HCFS':
+    if params.has_namenode:
       setup_hadoop_env()
     setup_java()

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/params.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/params.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/params.py
index aef9357..e8a8af6 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/params.py
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/params.py
@@ -39,8 +39,6 @@ from ambari_commons.constants import AMBARI_SUDO_BINARY
 config = Script.get_config()
 tmp_dir = Script.get_tmp_dir()
 
-dfs_type = default("/commandParams/dfs_type", "")
-
 artifact_dir = format("{tmp_dir}/AMBARI-artifacts/")
 jdk_name = default("/hostLevelParams/jdk_name", None)
 java_home = config['hostLevelParams']['java_home']
@@ -185,7 +183,7 @@ has_oozie_server = not len(oozie_servers) == 0
 has_falcon_server_hosts = not len(falcon_server_hosts) == 0
 has_ranger_admin = not len(ranger_admin_hosts) == 0
 
-if has_namenode or dfs_type == 'HCFS':
+if has_namenode:
   hadoop_conf_dir = conf_select.get_hadoop_conf_dir(force_latest_on_upgrade=True)
 
 hbase_tmp_dir = "/tmp/hbase-hbase"

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
index afdc018..76b4936 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
@@ -136,8 +136,7 @@ def set_uid(user, user_dirs):
 def setup_hadoop_env():
   import params
   stackversion = params.stack_version_unformatted
-  Logger.info("FS Type: {0}".format(params.dfs_type))
-  if params.has_namenode or stackversion.find('Gluster') >= 0 or params.dfs_type == 'HCFS':
+  if params.has_namenode or stackversion.find('Gluster') >= 0:
     if params.security_enabled:
       tc_owner = "root"
     else:

http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/params.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/params.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/params.py
index b0e2e7a..e713a3d 100644
--- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/params.py
+++ b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/params.py
@@ -27,8 +27,6 @@ from resource_management.libraries.functions import format
 from resource_management.libraries.functions.version import format_hdp_stack_version, compare_versions
 from ambari_commons.os_check import OSCheck
 from resource_management.libraries.script.script import Script
-from resource_management.libraries.functions import get_kinit_path
-from resource_management.libraries.resources.hdfs_resource import HdfsResource
 
 config = Script.get_config()
 
@@ -38,11 +36,6 @@ host_sys_prepped = default("/hostLevelParams/host_sys_prepped", False)
 stack_version_unformatted = str(config['hostLevelParams']['stack_version'])
 hdp_stack_version = format_hdp_stack_version(stack_version_unformatted)
 
-dfs_type = default("/commandParams/dfs_type", "")
-hadoop_conf_dir = "/etc/hadoop/conf"
-
-component_list = default("/localComponents", [])
-
 # hadoop default params
 mapreduce_libs_path = "/usr/lib/hadoop-mapreduce/*"
 
@@ -135,7 +128,7 @@ metrics_collection_period = default("/configurations/ams-site/timeline.metrics.s
 
 #hadoop params
-if has_namenode or dfs_type == 'HCFS':
+if has_namenode:
   hadoop_tmp_dir = format("/tmp/hadoop-{hdfs_user}")
   hadoop_conf_dir = conf_select.get_hadoop_conf_dir(force_latest_on_upgrade=True)
   task_log4j_properties_location = os.path.join(hadoop_conf_dir, "task-log4j.properties")
@@ -223,38 +216,6 @@ net_topology_script_dir = os.path.dirname(net_topology_script_file_path)
 net_topology_mapping_data_file_name = 'topology_mappings.data'
 net_topology_mapping_data_file_path = os.path.join(net_topology_script_dir, net_topology_mapping_data_file_name)
-
-#Added logic to create /tmp and /user directory for HCFS stack.
-has_core_site = 'core-site' in config['configurations'] -hdfs_user_keytab = config['configurations']['hadoop-env']['hdfs_user_keytab'] -kinit_path_local = get_kinit_path() -stack_version_unformatted = str(config['hostLevelParams']['stack_version']) -hdp_stack_version = format_hdp_stack_version(stack_version_unformatted) -hadoop_bin_dir = hdp_select.get_hadoop_dir("bin") -hdfs_principal_name = default('/configurations/hadoop-env/hdfs_principal_name', None) -hdfs_site = config['configurations']['hdfs-site'] -default_fs = config['configurations']['core-site']['fs.defaultFS'] -smoke_user = config['configurations']['cluster-env']['smokeuser'] -smoke_hdfs_user_dir = format("/user/{smoke_user}") -smoke_hdfs_user_mode = 0770 - -import functools -#create partial functions with common arguments for every HdfsResource call -#to create/delete/copyfromlocal hdfs directories/files we need to call params.HdfsResource in code -HdfsResource = functools.partial( - HdfsResource, - user=hdfs_user, - security_enabled = security_enabled, - keytab = hdfs_user_keytab, - kinit_path_local = kinit_path_local, - hadoop_bin_dir = hadoop_bin_dir, - hadoop_conf_dir = hadoop_conf_dir, - principal_name = hdfs_principal_name, - hdfs_site = hdfs_site, - default_fs = default_fs, - dfs_type = dfs_type -) - - ##### Namenode RPC ports - metrics config section start ##### # Figure out the rpc ports for current namenode http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py index 66b6833..ac9071e 100644 --- a/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py +++ 
b/ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-START/scripts/shared_initialization.py @@ -35,7 +35,7 @@ def setup_hadoop(): ) #directories - if params.has_namenode or params.dfs_type == 'HCFS': + if params.has_namenode: Directory(params.hdfs_log_dir_prefix, recursive=True, owner='root', @@ -43,13 +43,12 @@ def setup_hadoop(): mode=0775, cd_access='a', ) - if params.has_namenode: - Directory(params.hadoop_pid_dir_prefix, + Directory(params.hadoop_pid_dir_prefix, recursive=True, owner='root', group='root', cd_access='a', - ) + ) Directory(params.hadoop_tmp_dir, recursive=True, owner=params.hdfs_user, @@ -64,7 +63,7 @@ def setup_hadoop(): # if WebHDFS is not enabled we need this jar to create hadoop folders. if params.host_sys_prepped: print "Skipping copying of fast-hdfs-resource.jar as host is sys prepped" - elif params.dfs_type == 'HCFS' or not WebHDFSUtil.is_webhdfs_available(params.is_webhdfs_enabled, params.default_fs): + elif not WebHDFSUtil.is_webhdfs_available(params.is_webhdfs_enabled, params.default_fs): # for source-code of jar goto contrib/fast-hdfs-resource File(format("{ambari_libs_dir}/fast-hdfs-resource.jar"), mode=0644, @@ -104,9 +103,6 @@ def setup_hadoop(): content=Template("hadoop-metrics2.properties.j2") ) - if params.dfs_type == 'HCFS' and params.has_core_site and 'ECS_CLIENT' in params.component_list: - create_dirs() - def setup_configs(): """ @@ -114,7 +110,7 @@ def setup_configs(): """ import params - if params.has_namenode or params.dfs_type == 'HCFS': + if params.has_namenode: if os.path.exists(params.hadoop_conf_dir): File(params.task_log4j_properties_location, content=StaticFile("task-log4j.properties"), @@ -155,21 +151,3 @@ def create_javahome_symlink(): to="/usr/jdk64/jdk1.6.0_31", ) -def create_dirs(): - import params - params.HdfsResource("/tmp", - type="directory", - action="create_on_execute", - owner=params.hdfs_user, - mode=0777 - ) - params.HdfsResource(params.smoke_hdfs_user_dir, - type="directory", - 
action="create_on_execute", - owner=params.smoke_user, - mode=params.smoke_hdfs_user_mode - ) - params.HdfsResource(None, - action="execute" - ) - http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/metainfo.xml deleted file mode 100644 index 05df949..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/metainfo.xml +++ /dev/null @@ -1,23 +0,0 @@ - - - - - false - - 2.3 - http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/repos/repoinfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/repos/repoinfo.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/repos/repoinfo.xml deleted file mode 100644 index b44cca5..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/repos/repoinfo.xml +++ /dev/null @@ -1,122 +0,0 @@ - - - - http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json - - - http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.0.0 - HDP-2.3 - HDP - - - http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6 - HDP-UTILS-1.1.0.20 - HDP-UTILS - - - http://ECS_CLIENT_REPO/ - ECS-2.2.0.0 - ECS - - - - - http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.3.0.0 - HDP-2.3 - HDP - - - http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7 - HDP-UTILS-1.1.0.20 - HDP-UTILS - - - http://ECS_CLIENT_REPO/ - ECS-2.2.0.0 - ECS - - - - - http://public-repo-1.hortonworks.com/HDP/suse11sp3/2.x/updates/2.3.0.0 - HDP-2.3 - HDP - - - http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/suse11sp3 - HDP-UTILS-1.1.0.20 - HDP-UTILS - - - http://ECS_CLIENT_REPO/ - 
ECS-2.2.0.0 - ECS - - - - - http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/updates/2.3.0.0 - HDP-2.3 - HDP - - - http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu12 - HDP-UTILS-1.1.0.20 - HDP-UTILS - - - http://ECS_CLIENT_REPO/ - ECS-2.2.0.0 - ECS - - - - - http://public-repo-1.hortonworks.com/HDP/debian7/2.x/updates/2.3.0.0 - HDP-2.3 - HDP - - - http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/debian6 - HDP-UTILS-1.1.0.20 - HDP-UTILS - - - http://ECS_CLIENT_REPO/ - ECS-2.2.0.0 - ECS - - - - - http://public-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.3.0.0 - HDP-2.3 - HDP - - - http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu12 - HDP-UTILS-1.1.0.20 - HDP-UTILS - - - http://ECS_CLIENT_REPO/ - ECS-2.2.0.0 - ECS - - - http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/role_command_order.json ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/role_command_order.json b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/role_command_order.json deleted file mode 100644 index 08cb729..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/role_command_order.json +++ /dev/null @@ -1,10 +0,0 @@ -{ - "_comment" : "Record format:", - "_comment" : "blockedRole-blockedCommand: [blockerRole1-blockerCommand1, blockerRole2-blockerCommand2, ...]", - "general_deps" : { - "_comment" : "dependencies for all cases", - "ECS-SERVICE_CHECK": ["ECS-INSTALL"], - "RESOURCEMANAGER-START": ["ZOOKEEPER_SERVER-START"] - } -} - http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ACCUMULO/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ACCUMULO/metainfo.xml 
b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ACCUMULO/metainfo.xml deleted file mode 100644 index 79d2d02..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ACCUMULO/metainfo.xml +++ /dev/null @@ -1,28 +0,0 @@ - - - - 2.0 - - - ACCUMULO - common-services/ACCUMULO/1.6.1.2.2.0 - 1.7.0.2.3 - true - - - http://git-wip-us.apache.org/repos/asf/ambari/blob/f189015a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ATLAS/metainfo.xml ---------------------------------------------------------------------- diff --git a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ATLAS/metainfo.xml b/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ATLAS/metainfo.xml deleted file mode 100644 index 1b4c570..0000000 --- a/ambari-server/src/main/resources/stacks/HDP/2.3.ECS/services/ATLAS/metainfo.xml +++ /dev/null @@ -1,28 +0,0 @@ - - - - 2.0 - - - ATLAS - common-services/ATLAS/0.1.0.2.3 - 0.5.0.2.3 - true - - -
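The reverted before-START/params.py pre-bound the arguments shared by every `HdfsResource` call with `functools.partial`, so call sites only pass what differs. A minimal standalone sketch of that pattern (the `HdfsResource` class here is a stand-in, not Ambari's real resource class, and the bound values are illustrative):

```python
import functools

# Stand-in for resource_management's HdfsResource; attribute names are
# illustrative only, not Ambari's actual implementation.
class HdfsResource:
    def __init__(self, name, action=None, type=None, **common):
        self.name = name
        self.action = action
        self.type = type
        self.common = common  # the pre-bound, shared keyword arguments

# Pre-bind the arguments every call shares, as the reverted params.py did,
# so each call site stays short and consistent.
BoundHdfsResource = functools.partial(
    HdfsResource,
    user="hdfs",
    security_enabled=False,
    default_fs="hdfs://namenode:8020",
)

r = BoundHdfsResource("/tmp", type="directory", action="create_on_execute")
print(r.common["user"])        # hdfs
print(r.common["default_fs"])  # hdfs://namenode:8020
```

Call-site keywords override the pre-bound ones if repeated, which is why the revert's `create_dirs()` could pass only `type`, `action`, `owner`, and `mode` per directory.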
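Every params.py in the diff reads configuration through `default("/commandParams/dfs_type", "")`-style lookups that fall back when a key is absent. A rough sketch of that behavior (Ambari's real `default()` reads the global command JSON via `Script.get_config()`; this version takes the config dict explicitly for clarity):

```python
# Walk a '/'-separated path through nested dicts, returning a fallback when
# any segment is missing -- the lookup style the hook scripts rely on.
def default(path, fallback, config):
    node = config
    for key in path.strip("/").split("/"):
        if not isinstance(node, dict) or key not in node:
            return fallback
        node = node[key]
    return node

config = {"commandParams": {"dfs_type": "HCFS"}}
print(default("/commandParams/dfs_type", "", config))         # HCFS
print(default("/clusterHostInfo/namenode_host", [], config))  # []
```

The fallback is why reverting the `dfs_type` lines is safe on non-ECS clusters: hosts that never set `/commandParams/dfs_type` were already evaluating it as the empty string.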
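The behavioral core of the revert is collapsing every `has_namenode or dfs_type == 'HCFS'` guard back to plain `has_namenode`. A toy model of the before/after gating (the function and flag names are hypothetical; the real checks are spread across the hook scripts above):

```python
# Hypothetical condensation of the guard the revert changes: with ECS support,
# Hadoop config/env setup also ran on HCFS-only clusters with no NameNode;
# after the revert it requires a NameNode host.
def should_configure_hadoop(namenode_hosts, dfs_type, ecs_support=False):
    has_namenode = len(namenode_hosts) > 0
    if ecs_support:                      # pre-revert behavior
        return has_namenode or dfs_type == "HCFS"
    return has_namenode                  # post-revert behavior

print(should_configure_hadoop([], "HCFS", ecs_support=True))   # True
print(should_configure_hadoop([], "HCFS", ecs_support=False))  # False
print(should_configure_hadoop(["nn1"], "", ecs_support=False)) # True
```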