Subject: svn commit: r1741452 [7/9] - in /karaf/site/production/manual/cellar/latest-3: ./ architecture-guide/ images/ user-guide/
Date: Thu, 28 Apr 2016 15:26:26 -0000
From: jbonofre@apache.org
To: commits@karaf.apache.org

Added: karaf/site/production/manual/cellar/latest-3/nodes.html
URL: http://svn.apache.org/viewvc/karaf/site/production/manual/cellar/latest-3/nodes.html?rev=1741452&view=auto
==============================================================================
--- karaf/site/production/manual/cellar/latest-3/nodes.html (added)
+++ karaf/site/production/manual/cellar/latest-3/nodes.html Thu Apr 28 15:26:24 2016
+
+

Cellar nodes

+
+

This chapter describes the Cellar nodes manipulation commands.

+
+
+

Nodes identification

+
+

When you install the Cellar feature, your Karaf instance automatically becomes a Cellar cluster node and tries to discover the other Cellar nodes.

+
+
+

You can list the known Cellar nodes using the cluster:node-list command:

+
+
+
+
karaf@root()> cluster:node-list
+  | Id             | Host Name | Port
+-------------------------------------
+x | node2:5702     | node2 | 5702
+  | node1:5701     | node1 | 5701
+
+
+
+

The leading 'x' indicates the Karaf instance you are logged on to (the local node).

+
+
+ + + + + +
+
Note
+
+
+

If you don’t see the other nodes (when they should be there), it’s probably due to a network issue. By default, Cellar uses multicast to discover the nodes. If your network or network interface doesn’t support multicast, you have to switch to tcp-ip instead of multicast (a minimal configuration sketch follows this note). See [Core Configuration|hazelcast] for details.

+
+
+
+
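As a rough sketch only (the exact file layout depends on your Hazelcast and Cellar versions, and the [Core Configuration|hazelcast] section is authoritative), switching to tcp-ip is done in the join section of etc/hazelcast.xml by disabling the multicast join and enabling the tcp-ip join with your member addresses. node1 and node2 below are placeholder host names:

<join>
    <!-- disable multicast discovery -->
    <multicast enabled="false">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
    </multicast>
    <!-- enable explicit tcp-ip discovery, listing the cluster members -->
    <tcp-ip enabled="true">
        <member>node1:5701</member>
        <member>node2:5701</member>
    </tcp-ip>
</join>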
+ + + + + +
+
Note
+
+
+

In Cellar 2.3.x, Cellar used both multicast and tcp-ip by default. Due to a change in Hazelcast, it’s no longer possible to have both. In Cellar 3.0.x, the default configuration is multicast enabled and tcp-ip disabled. See [Core Configuration|hazelcast] for details.

+
+
+
+
+
+

Testing nodes

+
+

You can ping a node to test it:

+
+
+
+
karaf@root()> cluster:node-ping node1:5701
+PING node1:5701
+from 1: req=node1:5701 time=11 ms
+from 2: req=node1:5701 time=12 ms
+from 3: req=node1:5701 time=13 ms
+from 4: req=node1:5701 time=7 ms
+from 5: req=node1:5701 time=12 ms
+
+
+
+
+

Node Components: listener, producer, handler, consumer, and synchronizer

+
+

A Cellar node is actually a set of components; each component is dedicated to a specific purpose.

+
+
+

The etc/org.apache.karaf.cellar.node.cfg configuration file is dedicated to the configuration of the local node. +It’s where you can control the status of the different components.

+
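For illustration, a typical excerpt of this file looks like the following (a sketch only; the property names are indicative and may differ between Cellar versions, so check the file shipped with your distribution):

# etc/org.apache.karaf.cellar.node.cfg (excerpt, illustrative)
# cluster event producer and consumer switches
producer = true
consumer = true
# event handler switches, one per handler class
handler.org.apache.karaf.cellar.config.ConfigurationEventHandler = true
handler.org.apache.karaf.cellar.bundle.BundleEventHandler = true
handler.org.apache.karaf.cellar.features.FeaturesEventHandler = true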
+
+
+

Synchronizers and sync policy

+
+

A synchronizer is invoked when:

+
+
+
    +
  • +

    Cellar starts

    +
  • +
  • +

    a node joins a cluster group (see [groups] for details about cluster groups)

    +
  • +
  • +

    you explicitly call the cluster:sync command

    +
  • +
+
+
+

There is one synchronizer per resource: feature, bundle, config, and obr (optional).

+
+
+

Cellar supports three sync policies:

+
+
+
    +
  • +

cluster (default): if the node is the first one in the cluster, it pushes its local state to the cluster; otherwise, the node updates its local state with the cluster state (meaning that the cluster is the master).

    +
  • +
  • +

node: in this case, the node is the master, meaning that the cluster state will be overwritten by the node state.

    +
  • +
  • +

disabled: in this case, the synchronizer is not used at all, meaning that neither the node nor the cluster is updated (at sync time).

    +
  • +
+
+
+

You can configure the sync policy (for each resource and each cluster group) in the etc/org.apache.karaf.cellar.groups.cfg configuration file:

+
+
+
+
default.bundle.sync = cluster
+default.config.sync = cluster
+default.feature.sync = cluster
+default.obr.urls.sync = cluster
+
+
+
+

The cluster:sync command allows you to "force" the sync:

+
+
+
+
karaf@node1()> cluster:sync
+Synchronizing cluster group default
+        bundle: done
+        config: done
+        feature: done
+        obr.urls: No synchronizer found for obr.urls
+
+
+
+

It’s also possible to sync only one resource using:

+
+
+
    +
  • +

    -b (--bundle) for bundle

    +
  • +
  • +

    -f (--feature) for feature

    +
  • +
  • +

    -c (--config) for configuration

    +
  • +
  • +

    -o (--obr) for OBR URLs

    +
  • +
+
+
+

or a given cluster group using the -g (--group) option.

+
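For example, combining the options above, the following synchronizes only the features of the default cluster group (an illustrative session; the exact output shape may differ in your version):

karaf@node1()> cluster:sync -g default -f
Synchronizing cluster group default
        feature: done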
+
+
+

Producer, consumer, and handlers

+
+

To notify the other nodes in the cluster, Cellar produces a cluster event.

+
+
+

For that, the local node uses a producer to create and send the cluster event. +You can see the current status of the local producer using the cluster:producer-status command:

+
+
+
+
karaf@node1()> cluster:producer-status
+  | Node             | Status
+-----------------------------
+x | 172.17.42.1:5701 | ON
+
+
+
+

The cluster:producer-stop and cluster:producer-start commands allow you to stop or start the local cluster event +producer:

+
+
+
+
karaf@node1()> cluster:producer-stop
+  | Node             | Status
+-----------------------------
+x | 172.17.42.1:5701 | OFF
+karaf@node1()> cluster:producer-start
+  | Node             | Status
+-----------------------------
+x | 172.17.42.1:5701 | ON
+
+
+
+

When the producer is off, it means that the node is "isolated" from the cluster as it doesn’t send "outbound" cluster events +to the other nodes.

+
+
+

On the other hand, a node receives the cluster events on a consumer. As with the producer, you can see and control the consumer using dedicated commands:

+
+
+
+
karaf@node1()> cluster:consumer-status
+  | Node           | Status
+---------------------------
+x | localhost:5701 | ON
+karaf@node1()> cluster:consumer-stop
+  | Node           | Status
+---------------------------
+x | localhost:5701 | OFF
+karaf@node1()> cluster:consumer-start
+  | Node           | Status
+---------------------------
+x | localhost:5701 | ON
+
+
+
+

When the consumer is off, it means that the node is "isolated" from the cluster as it doesn’t receive "inbound" cluster events from the other nodes.

+
+
+

Different cluster events are involved. For instance, there are cluster events for features, bundles, configuration, OBR, etc. When a consumer receives a cluster event, it delegates the handling of the event to a specific handler, depending on the type of the cluster event. You can see the different handlers and their status using the cluster:handler-status command:

+
+
+
+
karaf@node1()> cluster:handler-status
+  | Node           | Status | Event Handler
+--------------------------------------------------------------------------------------
+x | localhost:5701 | ON     | org.apache.karaf.cellar.config.ConfigurationEventHandler
+x | localhost:5701 | ON     | org.apache.karaf.cellar.bundle.BundleEventHandler
+x | localhost:5701 | ON     | org.apache.karaf.cellar.features.FeaturesEventHandler
+
+
+
+

You can stop or start a specific handler using the cluster:handler-stop and cluster:handler-start commands.

+
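For example, using one of the handlers listed above (a sketch; the handler is identified by its class name, and the exact argument syntax may differ in your Cellar version):

karaf@node1()> cluster:handler-stop org.apache.karaf.cellar.features.FeaturesEventHandler
karaf@node1()> cluster:handler-status
  | Node           | Status | Event Handler
--------------------------------------------------------------------------------------
x | localhost:5701 | ON     | org.apache.karaf.cellar.config.ConfigurationEventHandler
x | localhost:5701 | ON     | org.apache.karaf.cellar.bundle.BundleEventHandler
x | localhost:5701 | OFF    | org.apache.karaf.cellar.features.FeaturesEventHandler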
+
+

When a handler is stopped, the node still receives the cluster events, but it doesn’t update the local resources handled by that handler.

+
+
+
+

Listeners

+
+

The listeners listen for local resource changes.

+
+
+

For instance, when you install a feature (with feature:install), the feature listener traps the change and broadcasts it as a cluster event to the other nodes.

+
+
+

To avoid some unexpected behaviors (especially when you stop a node), most of the listeners are switched off by default.

+
+
+

The listener statuses are configured in the etc/org.apache.karaf.cellar.node.cfg configuration file.

+
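For illustration (the exact property names may differ between Cellar versions), the listener switches in etc/org.apache.karaf.cellar.node.cfg look like this:

# local resource listeners, disabled by default
feature.listener = false
bundle.listener = false
config.listener = false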
+
+ + + + + +
+
Note
+
+
+

Enable listeners at your own risk. We encourage you to use the dedicated cluster commands and MBeans to manipulate the resources on the cluster (see the example after this note).

+
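For instance, instead of enabling the feature listener and running a local feature:install, you would act on the cluster directly (an illustrative command; cluster:feature-install takes the cluster group and the feature name, and assumes the Cellar feature commands are installed):

karaf@node1()> cluster:feature-install default eventadmin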
+
+
+
+
+
+

Clustered resources

+
+

Cellar provides dedicated commands and MBeans for clustered resources.

+
+
+

Please see the cluster groups section for details.

+
+
+
\ No newline at end of file

Added: karaf/site/production/manual/cellar/latest-3/obr.html
URL: http://svn.apache.org/viewvc/karaf/site/production/manual/cellar/latest-3/obr.html?rev=1741452&view=auto
==============================================================================
--- karaf/site/production/manual/cellar/latest-3/obr.html (added)
+++ karaf/site/production/manual/cellar/latest-3/obr.html Thu Apr 28 15:26:24 2016
+
+

OBR Support

+
+

Apache Karaf Cellar is able to "broadcast" OBR actions between cluster nodes of the same group.

+
+
+

Enable OBR support

+
+

To enable Cellar OBR support, the cellar-obr feature must first be installed:

+
+
+
+
karaf@root()> feature:install cellar-obr
+
+
+
+

The Cellar OBR feature will install the Karaf OBR feature, which provides the OBR service (RepositoryAdmin).

+
+
+
+

Register repository URL in a cluster

+
+

The cluster:obr-add-url command registers an OBR repository URL (repository.xml) in a cluster group:

+
+
+
+
karaf@root()> cluster:obr-add-url group file:///path/to/repository.xml
+karaf@root()> cluster:obr-add-url group http://karaf.cave.host:9090/cave/repo-repository.xml
+
+
+
+

The OBR repository URLs are stored in a cluster distributed set, which allows new nodes to register the distributed URLs:

+
+
+
+
karaf@root()> cluster:obr-list-url group
+file:///path/to/repository.xml
+http://karaf.cave.host:9090/cave/repo-repository.xml
+
+
+
+

When a repository is registered in the distributed OBR, Cellar maintains a distributed set of the bundles available on the OBR of a cluster group:

+
+
+
+
karaf@root()> cluster:obr-list group
+Name                                                                         | Symbolic Name                                                             | Version
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+Apache Aries JMX Blueprint Core                                              | org.apache.aries.jmx.blueprint.core                                       | 1.1.1.SNAPSHOT
+Apache Karaf :: JAAS :: Command                                              | org.apache.karaf.jaas.command                                             | 2.3.6.SNAPSHOT
+Apache Aries Proxy Service                                                   | org.apache.aries.proxy.impl                                               | 1.0.3.SNAPSHOT
+Apache Karaf :: System :: Shell Commands                                     | org.apache.karaf.system.command                                           | 3.0.2.SNAPSHOT
+Apache Karaf :: JDBC :: Core                                                 | org.apache.karaf.jdbc.core                                                | 3.0.2.SNAPSHOT
+Apache Aries Example SPI Provider Bundle 1                                   | org.apache.aries.spifly.examples.provider1.bundle                         | 1.0.1.SNAPSHOT
+Apache Aries Transaction Manager                                             | org.apache.aries.transaction.manager                                      | 1.1.1.SNAPSHOT
+Apache Karaf :: Features :: Management                                       | org.apache.karaf.features.management                                      | 2.3.6.SNAPSHOT
+Apache Aries Blueprint Sample Fragment for Testing Annotation                | org.apache.aries.blueprint.sample-fragment                                | 1.0.1.SNAPSHOT
+Apache Karaf :: Management :: MBeans :: OBR                                  | org.apache.karaf.management.mbeans.obr                                    | 2.3.6.SNAPSHOT
+Apache Karaf :: JNDI :: Core                                                 | org.apache.karaf.jndi.core                                                | 2.3.6.SNAPSHOT
+Apache Karaf :: Shell :: SSH                                                 | org.apache.karaf.shell.ssh                                                | 3.0.2.SNAPSHOT
+Apache Aries Blueprint Web OSGI                                              | org.apache.aries.blueprint.webosgi                                        | 1.0.2.SNAPSHOT
+Apache Aries Blueprint JEXL evaluator                                        | org.apache.aries.blueprint.jexl.evaluator                                 | 1.0.1.SNAPSHOT
+Apache Karaf :: JDBC :: Command                                              | org.apache.karaf.jdbc.command                                             | 3.0.2.SNAPSHOT
+...
+
+
+
+

When you remove a repository URL from the distributed OBR, the distributed set of bundles is updated:

+
+
+
+
karaf@root()> cluster:obr-remove-url group http://karaf.cave.host:9090/cave/repo-repository.xml
+
+
+
+
+

Deploying bundles using the cluster OBR

+
+

You can deploy a bundle (and that bundle’s dependent bundles) using the OBR on a given cluster group:

+
+
+
+
karaf@root()> cluster:obr-deploy group bundleId
+
+
+
+

The bundle ID is the symbolic name, viewable using the cluster:obr-list command. If you don’t provide the version, the OBR deploys the latest version +available. To provide the version, use a comma after the symbolic name:

+
+
+
+
karaf@root()> cluster:obr-deploy group org.apache.servicemix.specs.java-persistence-api-1.1.1
+karaf@root()> cluster:obr-deploy group org.apache.camel.camel-jms,2.9.0.SNAPSHOT
+
+
+
+

The OBR will automatically install the bundles required to satisfy the bundle dependencies.

+
+
+

The deploy command doesn’t start bundles by default. To start the bundles just after deployment, you can use the -s option:

+
+
+
+
karaf@root()> cluster:obr-deploy -s group org.ops4j.pax.web.pax-web-runtime
+
+
+
+
+
\ No newline at end of file