bigtop-commits mailing list archives

From ofle...@apache.org
Subject [24/51] [abbrv] bigtop git commit: BIGTOP-2468: Add Juju hadoop-processing bundle
Date Fri, 29 Jul 2016 17:03:05 GMT
BIGTOP-2468: Add Juju hadoop-processing bundle

Signed-off-by: Konstantin Boudnik <cos@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/bigtop/repo
Commit: http://git-wip-us.apache.org/repos/asf/bigtop/commit/c04e3d43
Tree: http://git-wip-us.apache.org/repos/asf/bigtop/tree/c04e3d43
Diff: http://git-wip-us.apache.org/repos/asf/bigtop/diff/c04e3d43

Branch: refs/heads/BIGTOP-2253
Commit: c04e3d43bf38dc9e31e60666903b010220b63114
Parents: d639645
Author: Kevin W Monroe <kevin.monroe@canonical.com>
Authored: Fri Jun 3 19:21:26 2016 +0000
Committer: Konstantin Boudnik <cos@apache.org>
Committed: Fri Jun 3 15:35:04 2016 -0700

----------------------------------------------------------------------
 bigtop-deploy/juju/hadoop-processing/.gitignore |   2 +
 bigtop-deploy/juju/hadoop-processing/README.md  | 194 +++++++++++++++++++
 .../juju/hadoop-processing/bundle-dev.yaml      |  68 +++++++
 .../juju/hadoop-processing/bundle-local.yaml    |  68 +++++++
 .../juju/hadoop-processing/bundle.yaml          |  68 +++++++
 bigtop-deploy/juju/hadoop-processing/copyright  |  16 ++
 .../juju/hadoop-processing/tests/01-bundle.py   |  96 +++++++++
 .../juju/hadoop-processing/tests/tests.yaml     |   4 +
 build.gradle                                    |   1 +
 pom.xml                                         |   1 +
 10 files changed, 518 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/.gitignore
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/.gitignore b/bigtop-deploy/juju/hadoop-processing/.gitignore
new file mode 100644
index 0000000..a295864
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/.gitignore
@@ -0,0 +1,2 @@
+*.pyc
+__pycache__

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/README.md
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/README.md b/bigtop-deploy/juju/hadoop-processing/README.md
new file mode 100644
index 0000000..6942577
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/README.md
@@ -0,0 +1,194 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+## Overview
+
+The Apache Hadoop software library is a framework that allows for the
+distributed processing of large data sets across clusters of computers
+using a simple programming model.
+
+It is designed to scale up from single servers to thousands of machines,
+each offering local computation and storage. Rather than rely on hardware
+to deliver high-availability, the library itself is designed to detect
+and handle failures at the application layer, thereby delivering a
+highly-available service on top of a cluster of computers, each of
+which may be prone to failures.
+
+This bundle provides a complete deployment of the core components of the
+[Apache Bigtop](http://bigtop.apache.org/)
+platform to perform distributed data analytics at scale.  These components
+include:
+
+  * NameNode (HDFS)
+  * ResourceManager (YARN)
+  * Slaves (DataNode and NodeManager)
+  * Client (Bigtop hadoop client)
+    * Plugin (subordinate cluster facilitator)
+
+Deploying this bundle gives you a fully configured and connected Apache Bigtop
+cluster on any supported cloud, which can be easily scaled to meet workload
+demands.
+
+
+## Deploying this bundle
+
+In this deployment, the aforementioned components are deployed on separate
+units. To deploy this bundle, simply use:
+
+    juju deploy hadoop-processing
+
+This will deploy this bundle and all the charms from the [charm store][].
+
+> Note: With Juju versions < 2.0, you will need to use [juju-deployer][] to
+deploy the bundle.
+
+You can also build all of the charms from their source layers in the
+[Bigtop repository][].  See the [charm package README][] for instructions
+to build and deploy the charms.
+
+The default bundle deploys three slave nodes and one node of each of
+the other services. To scale the cluster, use:
+
+    juju add-unit slave -n 2
+
+This will add two additional slave nodes, for a total of five.
+
+[charm store]: https://jujucharms.com/
+[Bigtop repository]: https://github.com/apache/bigtop
+[charm package README]: ../../../bigtop-packages/src/charm/README.md
+[juju-deployer]: https://pypi.python.org/pypi/juju-deployer/
+
+
+## Status and Smoke Test
+
+The services provide extended status reporting to indicate when they are ready:
+
+    juju status --format=tabular
+
+This is particularly useful when combined with `watch` to track the on-going
+progress of the deployment:
+
+    watch -n 0.5 juju status --format=tabular
+
+The charm for each master component (namenode, resourcemanager) also
+provides a `smoke-test` action that can be used to verify that the
+component is functioning as expected.  You can run them all and then watch the
+action status list:
+
+    juju action do namenode/0 smoke-test
+    juju action do resourcemanager/0 smoke-test
+    watch -n 0.5 juju action status
+
+Eventually, all of the actions should settle to `status: completed`.  If
+any go instead to `status: failed` then it means that component is not working
+as expected.  You can get more information about that component's smoke test:
+
+    juju action fetch <action-id>
+
+
+## Monitoring
+
+This bundle includes Ganglia for system-level monitoring of the namenode,
+resourcemanager, and slave units. Metrics are sent to a central
+ganglia unit for easy viewing in a browser. To view the ganglia web interface,
+first expose the service:
+
+    juju expose ganglia
+
+Now find the ganglia public IP address:
+
+    juju status ganglia
+
+The ganglia web interface will be available at:
+
+    http://GANGLIA_PUBLIC_IP/ganglia
+
+
+## Benchmarking
+
+This bundle provides several benchmarks to gauge the performance of your
+environment.
+
+The easiest way to run the benchmarks on this service is to relate it to the
+[Benchmark GUI][].  You will likely also want to relate it to the
+[Benchmark Collector][] to have machine-level information collected during the
+benchmark, for a more complete picture of how the machine performed.
+
+[Benchmark GUI]: https://jujucharms.com/benchmark-gui/
+[Benchmark Collector]: https://jujucharms.com/benchmark-collector/
+
+However, each benchmark is also an action that can be called manually:
+
+        $ juju action do resourcemanager/0 nnbench
+        Action queued with id: 55887b40-116c-4020-8b35-1e28a54cc622
+        $ juju action fetch --wait 0 55887b40-116c-4020-8b35-1e28a54cc622
+
+        results:
+          meta:
+            composite:
+              direction: asc
+              units: secs
+              value: "128"
+            start: 2016-02-04T14:55:39Z
+            stop: 2016-02-04T14:57:47Z
+          results:
+            raw: '{"BAD_ID": "0", "FILE: Number of read operations": "0", "Reduce input groups":
+              "8", "Reduce input records": "95", "Map output bytes": "1823", "Map input records":
+              "12", "Combine input records": "0", "HDFS: Number of bytes read": "18635", "FILE:
+              Number of bytes written": "32999982", "HDFS: Number of write operations": "330",
+              "Combine output records": "0", "Total committed heap usage (bytes)": "3144749056",
+              "Bytes Written": "164", "WRONG_LENGTH": "0", "Failed Shuffles": "0", "FILE:
+              Number of bytes read": "27879457", "WRONG_MAP": "0", "Spilled Records": "190",
+              "Merged Map outputs": "72", "HDFS: Number of large read operations": "0", "Reduce
+              shuffle bytes": "2445", "FILE: Number of large read operations": "0", "Map output
+              materialized bytes": "2445", "IO_ERROR": "0", "CONNECTION": "0", "HDFS: Number
+              of read operations": "567", "Map output records": "95", "Reduce output records":
+              "8", "WRONG_REDUCE": "0", "HDFS: Number of bytes written": "27412", "GC time
+              elapsed (ms)": "603", "Input split bytes": "1610", "Shuffled Maps ": "72", "FILE:
+              Number of write operations": "0", "Bytes Read": "1490"}'
+        status: completed
+        timing:
+          completed: 2016-02-04 14:57:48 +0000 UTC
+          enqueued: 2016-02-04 14:55:14 +0000 UTC
+          started: 2016-02-04 14:55:27 +0000 UTC
+
+
+## Deploying in Network-Restricted Environments
+
+Charms can be deployed in environments with limited network access. To deploy
+in such an environment, you will need a local mirror to serve required packages.
+
+
+### Mirroring Packages
+
+You can set up a local mirror for apt packages using squid-deb-proxy.
+For instructions on configuring juju to use this, see the
+[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
+
+
+## Contact Information
+
+- <bigdata@lists.ubuntu.com>
+
+
+## Resources
+
+- [Apache Bigtop](http://bigtop.apache.org/) home page
+- [Apache Bigtop issue tracking](http://bigtop.apache.org/issue-tracking.html)
+- [Apache Bigtop mailing lists](http://bigtop.apache.org/mail-lists.html)
+- [Juju Bigtop charms](https://jujucharms.com/q/apache/bigtop)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)
+- [Juju community](https://jujucharms.com/community)
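
A note on the "Deploying in Network-Restricted Environments" section above: the
README stops at pointing to the Juju proxy documentation, so the following is a
minimal sketch of what that configuration might look like, assuming a
squid-deb-proxy instance is already reachable (the host and port below are
placeholders, not part of this change):

    # Juju 1.x: route the environment's apt traffic through the local mirror
    # (Juju 2.x exposes the same key via `juju model-config`)
    juju set-env apt-http-proxy=http://<mirror-host>:8000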

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml b/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
new file mode 100644
index 0000000..abc1851
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/bundle-dev.yaml
@@ -0,0 +1,68 @@
+services:
+  openjdk:
+    charm: cs:trusty/openjdk
+    annotations:
+      gui-x: "500"
+      gui-y: "400"
+    options:
+      java-type: "jdk"
+      java-major: "8"
+  namenode:
+    charm: cs:~bigdata-dev/trusty/hadoop-namenode
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "800"
+    constraints: mem=7G
+  resourcemanager:
+    charm: cs:~bigdata-dev/trusty/hadoop-resourcemanager
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "0"
+    constraints: mem=7G
+  slave:
+    charm: cs:~bigdata-dev/trusty/hadoop-slave
+    num_units: 3
+    annotations:
+      gui-x: "0"
+      gui-y: "400"
+    constraints: mem=7G
+  plugin:
+    charm: cs:~bigdata-dev/trusty/hadoop-plugin
+    annotations:
+      gui-x: "1000"
+      gui-y: "400"
+  client:
+    charm: cs:trusty/hadoop-client
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "400"
+  ganglia-node:
+    charm: cs:trusty/ganglia-node
+    annotations:
+      gui-x: "250"
+      gui-y: "400"
+  ganglia:
+    charm: cs:trusty/ganglia
+    num_units: 1
+    annotations:
+      gui-x: "750"
+      gui-y: "400"
+series: trusty
+relations:
+  - [openjdk, namenode]
+  - [openjdk, resourcemanager]
+  - [openjdk, slave]
+  - [openjdk, client]
+  - [resourcemanager, namenode]
+  - [namenode, slave]
+  - [resourcemanager, slave]
+  - [plugin, namenode]
+  - [plugin, resourcemanager]
+  - [client, plugin]
+  - ["ganglia:node", ganglia-node]
+  - [ganglia-node, namenode]
+  - [ganglia-node, resourcemanager]
+  - [ganglia-node, slave]

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml b/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
new file mode 100644
index 0000000..3947f82
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/bundle-local.yaml
@@ -0,0 +1,68 @@
+services:
+  openjdk:
+    charm: cs:trusty/openjdk
+    annotations:
+      gui-x: "500"
+      gui-y: "400"
+    options:
+      java-type: "jdk"
+      java-major: "8"
+  namenode:
+    charm: local:trusty/hadoop-namenode
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "800"
+    constraints: mem=7G
+  resourcemanager:
+    charm: local:trusty/hadoop-resourcemanager
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "0"
+    constraints: mem=7G
+  slave:
+    charm: local:trusty/hadoop-slave
+    num_units: 3
+    annotations:
+      gui-x: "0"
+      gui-y: "400"
+    constraints: mem=7G
+  plugin:
+    charm: local:trusty/hadoop-plugin
+    annotations:
+      gui-x: "1000"
+      gui-y: "400"
+  client:
+    charm: cs:trusty/hadoop-client
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "400"
+  ganglia-node:
+    charm: cs:trusty/ganglia-node
+    annotations:
+      gui-x: "250"
+      gui-y: "400"
+  ganglia:
+    charm: cs:trusty/ganglia
+    num_units: 1
+    annotations:
+      gui-x: "750"
+      gui-y: "400"
+series: trusty
+relations:
+  - [openjdk, namenode]
+  - [openjdk, resourcemanager]
+  - [openjdk, slave]
+  - [openjdk, client]
+  - [resourcemanager, namenode]
+  - [namenode, slave]
+  - [resourcemanager, slave]
+  - [plugin, namenode]
+  - [plugin, resourcemanager]
+  - [client, plugin]
+  - ["ganglia:node", ganglia-node]
+  - [ganglia-node, namenode]
+  - [ganglia-node, resourcemanager]
+  - [ganglia-node, slave]
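
Because bundle-local.yaml references local: charm URLs, it is meant for charms
built from the Bigtop source layers rather than pulled from the charm store. A
hedged sketch of deploying it on Juju < 2.0 with juju-deployer (the repository
path is an assumption; see the charm package README for the actual build
output location):

    # assumes the built charms live under $JUJU_REPOSITORY/trusty
    export JUJU_REPOSITORY=$HOME/charms
    juju-deployer -c bundle-local.yaml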

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/bundle.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/bundle.yaml b/bigtop-deploy/juju/hadoop-processing/bundle.yaml
new file mode 100644
index 0000000..dcc5bd9
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/bundle.yaml
@@ -0,0 +1,68 @@
+services:
+  openjdk:
+    charm: cs:trusty/openjdk-1
+    annotations:
+      gui-x: "500"
+      gui-y: "400"
+    options:
+      java-type: "jdk"
+      java-major: "8"
+  namenode:
+    charm: cs:trusty/hadoop-namenode-3
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "800"
+    constraints: mem=7G
+  resourcemanager:
+    charm: cs:trusty/hadoop-resourcemanager-3
+    num_units: 1
+    annotations:
+      gui-x: "500"
+      gui-y: "0"
+    constraints: mem=7G
+  slave:
+    charm: cs:trusty/hadoop-slave-4
+    num_units: 3
+    annotations:
+      gui-x: "0"
+      gui-y: "400"
+    constraints: mem=7G
+  plugin:
+    charm: cs:trusty/hadoop-plugin-3
+    annotations:
+      gui-x: "1000"
+      gui-y: "400"
+  client:
+    charm: cs:trusty/hadoop-client-4
+    num_units: 1
+    annotations:
+      gui-x: "1250"
+      gui-y: "400"
+  ganglia-node:
+    charm: cs:trusty/ganglia-node-2
+    annotations:
+      gui-x: "250"
+      gui-y: "400"
+  ganglia:
+    charm: cs:trusty/ganglia-2
+    num_units: 1
+    annotations:
+      gui-x: "750"
+      gui-y: "400"
+series: trusty
+relations:
+  - [openjdk, namenode]
+  - [openjdk, resourcemanager]
+  - [openjdk, slave]
+  - [openjdk, client]
+  - [resourcemanager, namenode]
+  - [namenode, slave]
+  - [resourcemanager, slave]
+  - [plugin, namenode]
+  - [plugin, resourcemanager]
+  - [client, plugin]
+  - ["ganglia:node", ganglia-node]
+  - [ganglia-node, namenode]
+  - [ganglia-node, resourcemanager]
+  - [ganglia-node, slave]
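
Unlike the -dev and -local variants, bundle.yaml pins explicit charm store
revisions (hadoop-namenode-3, hadoop-slave-4, and so on), which keeps
deployments reproducible. Moving a deployed service to a newer revision later
is a separate, per-service step; a minimal sketch, assuming a newer revision
has been published:

    # upgrade one service at a time to the latest published charm revision
    juju upgrade-charm namenode
    juju upgrade-charm resourcemanager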

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/copyright
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/copyright b/bigtop-deploy/juju/hadoop-processing/copyright
new file mode 100644
index 0000000..e900b97
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/copyright
@@ -0,0 +1,16 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2015, Canonical Ltd., All Rights Reserved.
+License: Apache License 2.0
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ .
+     http://www.apache.org/licenses/LICENSE-2.0
+ .
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py b/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
new file mode 100755
index 0000000..176ff74
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import unittest
+
+import yaml
+import amulet
+
+
+class TestBundle(unittest.TestCase):
+    bundle_file = os.path.join(os.path.dirname(__file__), '..', 'bundle.yaml')
+
+    @classmethod
+    def setUpClass(cls):
+        # classmethod inheritance doesn't work quite right with
+        # setUpClass / tearDownClass, so subclasses have to manually call this
+        cls.d = amulet.Deployment(series='trusty')
+        with open(cls.bundle_file) as f:
+            bun = f.read()
+        bundle = yaml.safe_load(bun)
+        cls.d.load(bundle)
+        cls.d.setup(timeout=3600)
+        cls.d.sentry.wait_for_messages({'client': 'Ready'}, timeout=3600)
+        cls.hdfs = cls.d.sentry['namenode'][0]
+        cls.yarn = cls.d.sentry['resourcemanager'][0]
+        cls.slave = cls.d.sentry['slave'][0]
+        cls.client = cls.d.sentry['client'][0]
+
+    def test_components(self):
+        """
+        Confirm that all of the required components are up and running.
+        """
+        hdfs, retcode = self.hdfs.run("pgrep -a java")
+        yarn, retcode = self.yarn.run("pgrep -a java")
+        slave, retcode = self.slave.run("pgrep -a java")
+        client, retcode = self.client.run("pgrep -a java")
+
+        assert 'NameNode' in hdfs, "NameNode not started"
+        assert 'NameNode' not in yarn, "NameNode should not be running on resourcemanager"
+        assert 'NameNode' not in slave, "NameNode should not be running on slave"
+
+        assert 'ResourceManager' in yarn, "ResourceManager not started"
+        assert 'ResourceManager' not in hdfs, "ResourceManager should not be running on namenode"
+        assert 'ResourceManager' not in slave, "ResourceManager should not be running on slave"
+
+        assert 'JobHistoryServer' in yarn, "JobHistoryServer not started"
+        assert 'JobHistoryServer' not in hdfs, "JobHistoryServer should not be running on namenode"
+        assert 'JobHistoryServer' not in slave, "JobHistoryServer should not be running on slave"
+
+        assert 'NodeManager' in slave, "NodeManager not started"
+        assert 'NodeManager' not in yarn, "NodeManager should not be running on resourcemanager"
+        assert 'NodeManager' not in hdfs, "NodeManager should not be running on namenode"
+
+        assert 'DataNode' in slave, "DataNode not started"
+        assert 'DataNode' not in yarn, "DataNode should not be running on resourcemanager"
+        assert 'DataNode' not in hdfs, "DataNode should not be running on namenode"
+
+    def test_hdfs(self):
+        """Smoke test validates mkdir, ls, chmod, and rm on the hdfs cluster."""
+        unit_name = self.hdfs.info['unit_name']
+        uuid = self.d.action_do(unit_name, 'smoke-test')
+        result = self.d.action_fetch(uuid)
+        # hdfs smoke-test sets outcome=success on success
+        if (result['outcome'] != "success"):
+            error = "HDFS smoke-test failed"
+            amulet.raise_status(amulet.FAIL, msg=error)
+
+    def test_yarn(self):
+        """Smoke test validates teragen/terasort."""
+        unit_name = self.yarn.info['unit_name']
+        uuid = self.d.action_do(unit_name, 'smoke-test')
+        result = self.d.action_fetch(uuid)
+        # yarn smoke-test only returns results on failure; if result is not
+        # empty, the test has failed and has a 'log' key
+        if result:
+            error = "YARN smoke-test failed: %s" % result['log']
+            amulet.raise_status(amulet.FAIL, msg=error)
+
+
+if __name__ == '__main__':
+    unittest.main()
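
Since 01-bundle.py is a plain unittest module, it can also be run directly
against a bootstrapped Juju environment once the packages listed in
tests/tests.yaml are installed; a minimal sketch (bootstrapping the
environment itself is assumed):

    # install the test dependencies declared in tests/tests.yaml
    sudo apt-get install amulet python3-yaml
    # run the bundle test; it deploys bundle.yaml and waits up to an hour
    python3 bigtop-deploy/juju/hadoop-processing/tests/01-bundle.py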

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/bigtop-deploy/juju/hadoop-processing/tests/tests.yaml
----------------------------------------------------------------------
diff --git a/bigtop-deploy/juju/hadoop-processing/tests/tests.yaml b/bigtop-deploy/juju/hadoop-processing/tests/tests.yaml
new file mode 100644
index 0000000..8a4cf6f
--- /dev/null
+++ b/bigtop-deploy/juju/hadoop-processing/tests/tests.yaml
@@ -0,0 +1,4 @@
+reset: false
+packages:
+  - amulet
+  - python3-yaml

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/build.gradle
----------------------------------------------------------------------
diff --git a/build.gradle b/build.gradle
index 9af7a43..50b3227 100644
--- a/build.gradle
+++ b/build.gradle
@@ -116,6 +116,7 @@ rat {
        /* Juju charm files with rigid structure */
        "bigtop-packages/src/charm/**/wheelhouse.txt",
        "bigtop-packages/src/charm/**/*.yaml",
+       "bigtop-deploy/juju/**/*.yaml",
        /* Misc individual files */
        "src/site/resources/bigtop.rdf",
        "src/site/resources/images/bigtop-logo.ai",

http://git-wip-us.apache.org/repos/asf/bigtop/blob/c04e3d43/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 97098bb..deadd75 100644
--- a/pom.xml
+++ b/pom.xml
@@ -340,6 +340,7 @@
               <!-- Juju charm files with rigid structure -->
               <exclude>bigtop-packages/src/charm/**/wheelhouse.txt</exclude>
               <exclude>bigtop-packages/src/charm/**/*.yaml</exclude>
+              <exclude>bigtop-deploy/juju/**/*.yaml</exclude>
               <!-- Miscelaneous individual files -->
               <exclude>src/site/resources/bigtop.rdf</exclude>
               <exclude>src/site/resources/images/bigtop-logo.ai</exclude>
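
The build.gradle and pom.xml hunks add the new bundle YAML files to the Apache
RAT exclusion lists alongside the existing charm exclusions, since these
rigidly structured files cannot carry license headers. A quick way to confirm
the exclusions take effect is to re-run the license checks (command names
assume the RAT plugins already configured in this tree):

    # Gradle RAT task configured in build.gradle
    ./gradlew rat
    # Maven equivalent via the apache-rat plugin configured in pom.xml
    mvn apache-rat:check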

