From: jonathanhurley@apache.org
To: commits@ambari.apache.org
Date: Wed, 10 May 2017 13:37:38 -0000
Message-Id: <1f42b36f77b54608880b4c39915afec9@git.apache.org>
In-Reply-To: <3c75c0b18c8d4583801af4b6ff76fca9@git.apache.org>
References: <3c75c0b18c8d4583801af4b6ff76fca9@git.apache.org>
Reply-To: ambari-dev@ambari.apache.org
Mailing-List: contact commits-help@ambari.apache.org; run by ezmlm
X-Mailer: ASF-Git Admin Mailer
Subject: [15/22] ambari git commit: AMBARI-20891 - Allow extensions to auto-link with supported stack versions
archived-at: Wed, 10 May 2017 13:37:29 -0000
http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hbase-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hbase-site.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hbase-site.xml
new file mode 100644
index 0000000..5024e85
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hbase-site.xml
@@ -0,0 +1,137 @@
+<?xml version="1.0"?>
+<configuration>
+  <property>
+    <name>hbase.regionserver.msginterval</name>
+    <value>1000</value>
+    <description>Interval between messages from the RegionServer to HMaster
+      in milliseconds. Default is 15. Set this value low if you want unit
+      tests to be responsive.</description>
+  </property>
+  <property>
+    <name>hbase.client.pause</name>
+    <value>5000</value>
+    <description>General client pause value. Used mostly as value to wait
+      before running a retry of a failed get, region lookup, etc.</description>
+  </property>
+  <property>
+    <name>hbase.master.meta.thread.rescanfrequency</name>
+    <value>10000</value>
+    <description>How long the HMaster sleeps (in milliseconds) between scans of
+      the root and meta tables.</description>
+  </property>
+  <property>
+    <name>hbase.server.thread.wakefrequency</name>
+    <value>1000</value>
+    <description>Time to sleep in between searches for work (in milliseconds).
+      Used as sleep interval by service threads such as META scanner and log roller.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.handler.count</name>
+    <value>5</value>
+    <description>Count of RPC Server instances spun up on RegionServers.
+      Same property is used by the HMaster for count of master handlers.
+      Default is 10.</description>
+  </property>
+  <property>
+    <name>hbase.master.lease.period</name>
+    <value>6000</value>
+    <description>Length of time the master will wait before timing out a region
+      server lease. Since region servers report in every second (see above), this
+      value has been reduced so that the master will notice a dead region server
+      sooner. The default is 30 seconds.</description>
+  </property>
+  <property>
+    <name>hbase.master.info.port</name>
+    <value>-1</value>
+    <description>The port for the hbase master web UI.
+      Set to -1 if you do not want the info server to run.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port</name>
+    <value>-1</value>
+    <description>The port for the hbase regionserver web UI.
+      Set to -1 if you do not want the info server to run.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port.auto</name>
+    <value>true</value>
+    <description>Info server auto port bind. Enables automatic port
+      search if hbase.regionserver.info.port is already in use.
+      Enabled for testing to run multiple tests on one machine.</description>
+  </property>
+  <property>
+    <name>hbase.master.lease.thread.wakefrequency</name>
+    <value>3000</value>
+    <description>The interval between checks for expired region server leases.
+      This value has been reduced due to the other reduced values above so that
+      the master will notice a dead region server sooner. The default is 15 seconds.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.optionalcacheflushinterval</name>
+    <value>10000</value>
+    <description>
+      Amount of time to wait since the last time a region was flushed before
+      invoking an optional cache flush. Default 60,000.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.safemode</name>
+    <value>false</value>
+    <description>
+      Turn on/off safe mode in region server. Always on for production, always off
+      for tests.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.max.filesize</name>
+    <value>67108864</value>
+    <description>
+      Maximum desired file size for an HRegion. If filesize exceeds
+      value + (value / 2), the HRegion is split in two. Default: 256M.
+      Keep the maximum filesize small so we split more often in tests.
+    </description>
+  </property>
+  <property>
+    <name>hadoop.log.dir</name>
+    <value>${user.dir}/../logs</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.clientPort</name>
+    <value>21818</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+      The port at which the clients will connect.</description>
+  </property>
+</configuration>
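The fixture above follows the standard Hadoop configuration format: name/value/description triples nested under a single configuration root. As a minimal sketch of how such a file can be read (the helper name and sample are illustrative, not part of Ambari's API):

```python
import xml.etree.ElementTree as ET

def load_hadoop_config(xml_text):
    """Parse Hadoop-style configuration XML into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    conf = {}
    for prop in root.findall("property"):
        name = prop.findtext("name")
        value = prop.findtext("value")
        if name is not None:
            conf[name] = value
    return conf

sample = """<configuration>
  <property>
    <name>hbase.client.pause</name>
    <value>5000</value>
    <description>General client pause value.</description>
  </property>
</configuration>"""

print(load_hadoop_config(sample))  # {'hbase.client.pause': '5000'}
```

Note that values come back as strings; Hadoop itself coerces them on lookup (getInt, getBoolean, and so on).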
http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-log4j.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-log4j.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-log4j.xml
new file mode 100644
index 0000000..649472d
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-log4j.xml
@@ -0,0 +1,199 @@
+<?xml version="1.0"?>
+<configuration>
+  <property>
+    <name>content</name>
+    <value>
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+# Define some default values that can be overridden by system properties
+hadoop.root.logger=INFO,console
+hadoop.log.dir=.
+hadoop.log.file=hadoop.log
+
+# Define the root logger to the system property "hadoop.root.logger".
+log4j.rootLogger=${hadoop.root.logger}, EventCounter
+
+# Logging Threshold
+log4j.threshhold=ALL
+
+#
+# Daily Rolling File Appender
+#
+
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+# Debugging Pattern format
+#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
+
+#
+# TaskLog Appender
+#
+
+#Default values
+hadoop.tasklog.taskid=null
+hadoop.tasklog.iscleanup=false
+hadoop.tasklog.noKeepSplits=4
+hadoop.tasklog.totalLogFileSize=100
+hadoop.tasklog.purgeLogSplits=true
+hadoop.tasklog.logsRetainHours=12
+
+log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
+log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
+log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
+log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
+
+log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
+log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+
+#
+# Security audit appender
+#
+hadoop.security.logger=INFO,console
+hadoop.security.log.maxfilesize=256MB
+hadoop.security.log.maxbackupindex=20
+log4j.category.SecurityLogger=${hadoop.security.logger}
+hadoop.security.log.file=SecurityAuth.audit
+log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
+log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
+
+log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
+log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
+log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
+
+#
+# hdfs audit logging
+#
+hdfs.audit.logger=INFO,console
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
+
+#
+# mapred audit logging
+#
+mapred.audit.logger=INFO,console
+log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
+log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
+log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
+log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
+log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd
+
+#
+# Rolling File Appender
+#
+
+log4j.appender.RFA=org.apache.log4j.RollingFileAppender
+log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
+
+# Logfile size and 30-day backups
+log4j.appender.RFA.MaxFileSize=256MB
+log4j.appender.RFA.MaxBackupIndex=10
+
+log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
+#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
+
+# Custom Logging levels
+
+hadoop.metrics.log.level=INFO
+#log4j.logger.org.apache.hadoop.mapred.JobTracker=DEBUG
+#log4j.logger.org.apache.hadoop.mapred.TaskTracker=DEBUG
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}
+
+# Jets3t library
+log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
+
+#
+# Null Appender
+# Trap security logger on the hadoop client side
+#
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# Event Counter Appender
+# Sends counts of logging messages at different severity levels to Hadoop Metrics.
+#
+log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
+
+# Removes "deprecated" messages
+log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
+    </value>
+  </property>
+</configuration>

http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-site.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-site.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-site.xml
new file mode 100644
index 0000000..2b979d7
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/configuration/hdfs-site.xml
@@ -0,0 +1,396 @@
+<?xml version="1.0"?>
+<configuration>
+  <property>
+    <name>dfs.name.dir</name>
+    <value>/mnt/hmc/hadoop/hdfs/namenode</value>
+    <description>Determines where on the local filesystem the DFS name node
+      should store the name table. If this is a comma-delimited list
+      of directories then the name table is replicated in all of the
+      directories, for redundancy.</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.support.append</name>
+    <value>true</value>
+    <description>to enable dfs append</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.webhdfs.enabled</name>
+    <value>false</value>
+    <description>to enable webhdfs</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.datanode.failed.volumes.tolerated</name>
+    <value>0</value>
+    <description>#of failed disks dn would tolerate</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.block.local-path-access.user</name>
+    <value>hbase</value>
+    <description>the user who is allowed to perform short
+      circuit reads.</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.data.dir</name>
+    <value>/mnt/hmc/hadoop/hdfs/data</value>
+    <description>Determines where on the local filesystem an DFS data node
+      should store its blocks. If this is a comma-delimited
+      list of directories, then data will be stored in all named
+      directories, typically on different devices.
+      Directories that do not exist are ignored.</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.hosts.exclude</name>
+    <value>/etc/hadoop/conf/dfs.exclude</value>
+    <description>Names a file that contains a list of hosts that are
+      not permitted to connect to the namenode. The full pathname of the
+      file must be specified. If the value is empty, no hosts are
+      excluded.</description>
+  </property>
+  <property>
+    <name>dfs.hosts</name>
+    <value>/etc/hadoop/conf/dfs.include</value>
+    <description>Names a file that contains a list of hosts that are
+      permitted to connect to the namenode. The full pathname of the file
+      must be specified. If the value is empty, all hosts are
+      permitted.</description>
+  </property>
+  <property>
+    <name>dfs.replication.max</name>
+    <value>50</value>
+    <description>Maximal block replication.</description>
+  </property>
+  <property>
+    <name>dfs.replication</name>
+    <value>3</value>
+    <description>Default block replication.</description>
+  </property>
+  <property>
+    <name>dfs.heartbeat.interval</name>
+    <value>3</value>
+    <description>Determines datanode heartbeat interval in seconds.</description>
+  </property>
+  <property>
+    <name>dfs.safemode.threshold.pct</name>
+    <value>1.0f</value>
+    <description>
+      Specifies the percentage of blocks that should satisfy
+      the minimal replication requirement defined by dfs.replication.min.
+      Values less than or equal to 0 mean not to start in safe mode.
+      Values greater than 1 will make safe mode permanent.
+    </description>
+  </property>
+  <property>
+    <name>dfs.balance.bandwidthPerSec</name>
+    <value>6250000</value>
+    <description>
+      Specifies the maximum amount of bandwidth that each datanode
+      can utilize for the balancing purpose in term of
+      the number of bytes per second.
+    </description>
+  </property>
+  <property>
+    <name>dfs.datanode.address</name>
+    <value>0.0.0.0:50010</value>
+  </property>
+  <property>
+    <name>dfs.datanode.http.address</name>
+    <value>0.0.0.0:50075</value>
+  </property>
+  <property>
+    <name>dfs.block.size</name>
+    <value>134217728</value>
+    <description>The default block size for new files.</description>
+  </property>
+  <property>
+    <name>dfs.http.address</name>
+    <value>hdp1.cybervisiontech.com.ua:50070</value>
+    <description>The name of the default file system. Either the
+      literal string "local" or a host:port for HDFS.</description>
+    <final>true</final>
+  </property>
+  <property>
+    <name>dfs.datanode.du.reserved</name>
+    <value>1073741824</value>
+    <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.</description>
+  </property>
+  <property>
+    <name>dfs.datanode.ipc.address</name>
+    <value>0.0.0.0:8010</value>
+    <description>The datanode ipc server address and port.
+      If the port is 0 then the server will start on a free port.</description>
+  </property>
+  <property>
+    <name>dfs.blockreport.initialDelay</name>
+    <value>120</value>
+    <description>Delay for first block report in seconds.</description>
+  </property>
+  <property>
+    <name>dfs.namenode.handler.count</name>
+    <value>40</value>
+    <description>The number of server threads for the namenode.</description>
+  </property>
+  <property>
+    <name>dfs.datanode.max.xcievers</name>
+    <value>1024</value>
+    <description>PRIVATE CONFIG VARIABLE</description>
+  </property>
+  <property>
+    <name>dfs.umaskmode</name>
+    <value>077</value>
+    <description>The octal umask used when creating files and directories.</description>
+  </property>
+  <property>
+    <name>dfs.web.ugi</name>
+    <value>gopher,gopher</value>
+    <description>The user account used by the web interface.
+      Syntax: USERNAME,GROUP1,GROUP2, ...</description>
+  </property>
+  <property>
+    <name>dfs.permissions</name>
+    <value>true</value>
+    <description>
+      If "true", enable permission checking in HDFS.
+      If "false", permission checking is turned off,
+      but all other behavior is unchanged.
+      Switching from one parameter value to the other does not change the mode,
+      owner or group of files or directories.
+    </description>
+  </property>
+  <property>
+    <name>dfs.permissions.supergroup</name>
+    <value>hdfs</value>
+    <description>The name of the group of super-users.</description>
+  </property>
+  <property>
+    <name>dfs.namenode.handler.count</name>
+    <value>100</value>
+    <description>Added to grow Queue size so that more client connections are allowed</description>
+  </property>
+  <property>
+    <name>ipc.server.max.response.size</name>
+    <value>5242880</value>
+  </property>
+  <property>
+    <name>dfs.block.access.token.enable</name>
+    <value>true</value>
+    <description>
+      If "true", access tokens are used as capabilities for accessing datanodes.
+      If "false", no access tokens are checked on accessing datanodes.
+    </description>
+  </property>
+  <property>
+    <name>dfs.namenode.kerberos.principal</name>
+    <value>nn/_HOST@</value>
+    <description>Kerberos principal name for the NameNode</description>
+  </property>
+  <property>
+    <name>dfs.secondary.namenode.kerberos.principal</name>
+    <value>nn/_HOST@</value>
+    <description>Kerberos principal name for the secondary NameNode.</description>
+  </property>
+  <property>
+    <name>dfs.namenode.kerberos.https.principal</name>
+    <value>host/_HOST@</value>
+    <description>The Kerberos principal for the host that the NameNode runs on.</description>
+  </property>
+  <property>
+    <name>dfs.secondary.namenode.kerberos.https.principal</name>
+    <value>host/_HOST@</value>
+    <description>The Kerberos principal for the host that the secondary NameNode runs on.</description>
+  </property>
+  <property>
+    <name>dfs.secondary.http.address</name>
+    <value>hdp2.cybervisiontech.com.ua:50090</value>
+    <description>Address of secondary namenode web server</description>
+  </property>
+  <property>
+    <name>dfs.secondary.https.port</name>
+    <value>50490</value>
+    <description>The https port where secondary-namenode binds</description>
+  </property>
+  <property>
+    <name>dfs.web.authentication.kerberos.principal</name>
+    <value>HTTP/_HOST@</value>
+    <description>
+      The HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
+      The HTTP Kerberos principal MUST start with 'HTTP/' per the Kerberos
+      HTTP SPNEGO specification.
+    </description>
+  </property>
+  <property>
+    <name>dfs.web.authentication.kerberos.keytab</name>
+    <value>/nn.service.keytab</value>
+    <description>
+      The Kerberos keytab file with the credentials for the
+      HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
+    </description>
+  </property>
+  <property>
+    <name>dfs.datanode.kerberos.principal</name>
+    <value>dn/_HOST@</value>
+    <description>
+      The Kerberos principal that the DataNode runs as. "_HOST" is replaced by the real host name.
+    </description>
+  </property>
+  <property>
+    <name>dfs.namenode.keytab.file</name>
+    <value>/nn.service.keytab</value>
+    <description>
+      Combined keytab file containing the namenode service and host principals.
+    </description>
+  </property>
+  <property>
+    <name>dfs.secondary.namenode.keytab.file</name>
+    <value>/nn.service.keytab</value>
+    <description>
+      Combined keytab file containing the namenode service and host principals.
+    </description>
+  </property>
+  <property>
+    <name>dfs.datanode.keytab.file</name>
+    <value>/dn.service.keytab</value>
+    <description>
+      The filename of the keytab file for the DataNode.
+    </description>
+  </property>
+  <property>
+    <name>dfs.https.port</name>
+    <value>50470</value>
+    <description>The https port where namenode binds</description>
+  </property>
+  <property>
+    <name>dfs.https.address</name>
+    <value>hdp1.cybervisiontech.com.ua:50470</value>
+    <description>The https address where namenode binds</description>
+  </property>
+  <property>
+    <name>dfs.datanode.data.dir.perm</name>
+    <value>750</value>
+    <description>The permissions that should be there on dfs.data.dir
+      directories. The datanode will not come up if the permissions are
+      different on existing dfs.data.dir directories. If the directories
+      don't exist, they will be created with this permission.</description>
+  </property>
+  <property>
+    <name>dfs.access.time.precision</name>
+    <value>0</value>
+    <description>The access time for HDFS file is precise upto this value.
+      The default value is 1 hour. Setting a value of 0 disables
+      access times for HDFS.</description>
+  </property>
+  <property>
+    <name>dfs.cluster.administrators</name>
+    <value>hdfs</value>
+    <description>ACL for who all can view the default servlets in the HDFS</description>
+  </property>
+  <property>
+    <name>ipc.server.read.threadpool.size</name>
+    <value>5</value>
+  </property>
+</configuration>

http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/metainfo.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/metainfo.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/metainfo.xml
new file mode 100644
index 0000000..da61660
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/metainfo.xml
@@ -0,0 +1,30 @@
+<?xml version="1.0"?>
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>HDFS</name>
+      <extends>common-services/HDFS/1.0</extends>
+      <configuration-dependencies>
+        <config-type>core-site</config-type>
+        <config-type>global</config-type>
+        <config-type>hdfs-site</config-type>
+        <config-type>hadoop-policy</config-type>
+        <config-type>hdfs-log4j</config-type>
+      </configuration-dependencies>
+    </service>
+  </services>
+</metainfo>

http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/package/dummy-script.py
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/package/dummy-script.py b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/package/dummy-script.py
new file mode 100644
index 0000000..35de4bb
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HDFS/package/dummy-script.py
@@ -0,0 +1,20 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+Ambari Agent
+
+"""

http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HIVE/metainfo.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HIVE/metainfo.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HIVE/metainfo.xml
new file mode 100644
index 0000000..9c122b2
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/HIVE/metainfo.xml
@@ -0,0 +1,26 @@
+<?xml version="1.0"?>
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>HIVE</name>
+      <extends>common-services/HIVE/1.0</extends>
+    </service>
+  </services>
+</metainfo>

http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/MAPREDUCE/metainfo.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/MAPREDUCE/metainfo.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/MAPREDUCE/metainfo.xml
new file mode 100644
index 0000000..3b0b3d9
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/MAPREDUCE/metainfo.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0"?>
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>MAPREDUCE</name>
+      <extends>common-services/MAPREDUCE/1.0</extends>
+    </service>
+  </services>
+</metainfo>

http://git-wip-us.apache.org/repos/asf/ambari/blob/aa78a172/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/ZOOKEEPER/metainfo.xml
----------------------------------------------------------------------
diff --git a/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/ZOOKEEPER/metainfo.xml b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/ZOOKEEPER/metainfo.xml
new file mode 100644
index 0000000..9c8a299
--- /dev/null
+++ b/ambari-server/src/test/resources/stacks_with_extensions/HDP/0.3/services/ZOOKEEPER/metainfo.xml
@@ -0,0 +1,26 @@
+<?xml version="1.0"?>
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <services>
+    <service>
+      <name>ZOOKEEPER</name>
+      <extends>common-services/ZOOKEEPER/1.0</extends>
+    </service>
+  </services>
+</metainfo>
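The metainfo fixtures in this commit each point a stack service at a common-services definition, which is the inheritance mechanism AMBARI-20891's auto-linking builds on. As a hedged sketch of that resolution step (the dict-based service model and function below are illustrative assumptions, not Ambari's actual Java API):

```python
def resolve_service(name, stack_services, common_services):
    """Merge a stack service definition with the base definition it extends.

    `stack_services` maps service name -> metainfo dict; `common_services`
    maps an extends path such as 'common-services/HDFS/1.0' -> base dict.
    Keys defined by the stack override the inherited ones.
    """
    service = stack_services[name]
    base_path = service.get("extends")
    merged = dict(common_services.get(base_path, {}))
    # Everything the stack declares wins over the inherited definition.
    merged.update({k: v for k, v in service.items() if k != "extends"})
    return merged

# Hypothetical data mirroring the HDFS fixture above.
stack_services = {
    "HDFS": {
        "name": "HDFS",
        "extends": "common-services/HDFS/1.0",
        "config-types": ["core-site", "hdfs-site", "hdfs-log4j"],
    }
}
common_services = {
    "common-services/HDFS/1.0": {
        "name": "HDFS",
        "version": "1.0",
        "components": ["NAMENODE", "DATANODE"],
    }
}

resolved = resolve_service("HDFS", stack_services, common_services)
# resolved carries the inherited components plus the stack's own config-types
```

The same merge idea extends transitively when a base definition itself extends another version; Ambari walks that chain when it loads a stack.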