Subject: svn commit: r1721040 [5/12] - in /karaf/site/production/manual/cellar/latest-4: ./ architecture-guide/ user-guide/
Date: Sun, 20 Dec 2015 15:59:45 -0000
To: commits@karaf.apache.org
From: jbonofre@apache.org

Added: karaf/site/production/manual/cellar/latest-4/hazelcast.html
URL: http://svn.apache.org/viewvc/karaf/site/production/manual/cellar/latest-4/hazelcast.html?rev=1721040&view=auto
==============================================================================
--- karaf/site/production/manual/cellar/latest-4/hazelcast.html (added)
+++ karaf/site/production/manual/cellar/latest-4/hazelcast.html Sun Dec 20 15:59:44 2015
@@ -0,0 +1,706 @@
+
+

Core runtime and Hazelcast

+
+
+

Cellar uses Hazelcast as its cluster engine.

+
+
+

When you install the cellar feature, a hazelcast feature is automatically installed, providing the etc/hazelcast.xml +configuration file.

+
+
+

The etc/hazelcast.xml configuration file contains all the core configuration, especially:
+* the Hazelcast cluster identifiers (group name and password)
+* the network discovery and security configuration

+
+
+

Hazelcast cluster identification

+
+

The <group/> element in the etc/hazelcast.xml defines the identification of the Hazelcast cluster:

+
+
+
+
    <group>
+        <name>cellar</name>
+        <password>pass</password>
+    </group>
+
+
+
+

All Cellar nodes have to use the same name and password (to be part of the same Hazelcast cluster).
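Once all nodes share the same group name and password, you can verify that they actually joined the same Hazelcast cluster with the cluster:node-list console command; each node should list all the cluster members:

```
karaf@root()> cluster:node-list
```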

+
+
+
+

Network

+
+

The <network/> element in the etc/hazelcast.xml contains all the network configuration.

+
+
+

First, it defines the port numbers used by Hazelcast:

+
+
+
+
        <port auto-increment="true" port-count="100">5701</port>
+        <outbound-ports>
+            <!--
+                Allowed port range when connecting to other nodes.
+                0 or * means use system provided port.
+            -->
+            <ports>0</ports>
+        </outbound-ports>
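If a firewall sits between the nodes, you can restrict the outbound port range instead of letting the system pick a port. A sketch (the range below is only an example; Hazelcast accepts comma-separated ports and ranges):

```xml
        <outbound-ports>
            <!-- only use ports 33000 to 35000 for outgoing connections (example range) -->
            <ports>33000-35000</ports>
        </outbound-ports>
```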
+
+
+
+

Second, it defines the mechanism used to discover the Cellar nodes: it’s the <join/> element.

+
+
+

By default, Hazelcast uses unicast.

+
+
+

You can also use multicast (enabled by default in Cellar):

+
+
+
+
            <multicast enabled="true">
+                <multicast-group>224.2.2.3</multicast-group>
+                <multicast-port>54327</multicast-port>
+            </multicast>
+            <tcp-ip enabled="false"/>
+            <aws enabled="false"/>
+
+
+
+

Instead of using multicast, you can also explicitly define the host names (or IP addresses) of the different +Cellar nodes:

+
+
+
+
            <multicast enabled="false"/>
+            <tcp-ip enabled="true"/>
+            <aws enabled="false"/>
+
+
+
+

By default, it will bind to all interfaces on the node machine. It’s possible to specify an interface:

+
+
+
+
            <multicast enabled="false"/>
+            <tcp-ip enabled="true">
+                <interface>127.0.0.1</interface>
+            </tcp-ip>
+            <aws enabled="false"/>
+
+
+
+

NB: in previous Hazelcast versions (especially the one used by Cellar 2.3.x), it was possible to have multicast and tcp-ip enabled at the same time.
+In Hazelcast 3.3.x (the version currently used by Cellar 3.0.x), only one discovery mechanism can be enabled at a time. Cellar uses multicast by default (tcp-ip is disabled).
+If your network or network interface doesn’t support multicast, you have to enable tcp-ip and disable multicast.
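When tcp-ip discovery is enabled, you typically list the members explicitly inside the <tcp-ip/> element. A sketch (the addresses below are examples; use the actual host:port of your own nodes):

```xml
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <!-- example member addresses; replace with your node addresses -->
                <member>192.168.1.10:5701</member>
                <member>192.168.1.11:5701</member>
            </tcp-ip>
            <aws enabled="false"/>
```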

+
+
+

You can also discover nodes located on an Amazon EC2 instance:

+
+
+
+
            <multicast enabled="false"/>
+            <tcp-ip enabled="false"/>
+            <aws enabled="true">
+                <access-key>my-access-key</access-key>
+                <secret-key>my-secret-key</secret-key>
+                <!--optional, default is us-east-1 -->
+                <region>us-west-1</region>
+                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
+                <host-header>ec2.amazonaws.com</host-header>
+                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
+                <security-group-name>hazelcast-sg</security-group-name>
+                <tag-key>type</tag-key>
+                <tag-value>hz-nodes</tag-value>
+            </aws>
+
+
+
+

Third, you can specify on which network interface the cluster is running (whatever discovery mechanism is used). By default, Hazelcast listens on all interfaces (0.0.0.0).
+But you can specify an interface:

+
+
+
+
        <interfaces enabled="true">
+            <interface>10.10.1.*</interface>
+        </interfaces>
+
+
+
+

Finally, you can also enable transport security on the cluster.
+Two modes are supported:

+
+
+
    +
  • +

    SSL:

    +
  • +
+
+
+
+
        <ssl enabled="true"/>
+
+
+
+
    +
  • +

    Symmetric Encryption:

    +
  • +
+
+
+
+
        <symmetric-encryption enabled="true">
+            <!--
+               encryption algorithm such as
+               DES/ECB/PKCS5Padding,
+               PBEWithMD5AndDES,
+               AES/CBC/PKCS5Padding,
+               Blowfish,
+               DESede
+            -->
+            <algorithm>PBEWithMD5AndDES</algorithm>
+            <!-- salt value to use when generating the secret key -->
+            <salt>thesalt</salt>
+            <!-- pass phrase to use when generating the secret key -->
+            <password>thepass</password>
+            <!-- iteration count to use when generating the secret key -->
+            <iteration-count>19</iteration-count>
+        </symmetric-encryption>
+
+
+
+

Cellar provides additional discovery mechanisms; see the Discovery Service (jclouds and Kubernetes) section for details.

+
+
+
+
+
\ No newline at end of file

Added: karaf/site/production/manual/cellar/latest-4/http-balancer.html
URL: http://svn.apache.org/viewvc/karaf/site/production/manual/cellar/latest-4/http-balancer.html?rev=1721040&view=auto
==============================================================================
--- karaf/site/production/manual/cellar/latest-4/http-balancer.html (added)
+++ karaf/site/production/manual/cellar/latest-4/http-balancer.html Sun Dec 20 15:59:44 2015
@@ -0,0 +1,699 @@
+
+

HTTP Balancer

+
+
+

Apache Karaf Cellar is able to expose servlets local to a node on the whole cluster.
+It means that a client (browser) can use any node in the cluster: the requests are proxied to the node actually
+hosting the servlets.

+
+
+

Enable HTTP Balancer

+
+

To enable Cellar HTTP Balancer, you have to first install the http and http-whiteboard features:

+
+
+
+
karaf@root()> feature:install http
+karaf@root()> feature:install http-whiteboard
+
+
+
+

Now, we install the cellar-http-balancer feature, which actually provides the balancer:

+
+
+
+
karaf@root()> feature:install cellar-http-balancer
+
+
+
+

Of course, you can use Cellar to spread the installation of the cellar-http-balancer feature across all nodes in the
+cluster group:

+
+
+
+
karaf@root()> cluster:feature-install default cellar-http-balancer
+
+
+
+

It’s done: the Cellar HTTP Balancer is now enabled. It will expose proxy servlets on nodes.

+
+
+
+

Balancer in action

+
+

To illustrate Cellar HTTP Balancer in action, you need at least a cluster with two nodes.

+
+
+

On node1, we enable the Cellar HTTP Balancer:

+
+
+
+
karaf@node1()> feature:install http
+karaf@node1()> feature:install http-whiteboard
+karaf@node1()> feature:repo-add cellar 4.0.0
+karaf@node1()> feature:install cellar
+karaf@node1()> cluster:feature-install default cellar-http-balancer
+
+
+
+

Now, we install the webconsole on node1:

+
+
+
+
karaf@node1()> feature:install webconsole
+
+
+
+

We can see the "local" servlets provided by the webconsole feature using the http:list command:

+
+
+
+
karaf@node1()> http:list
+ID  | Servlet          | Servlet-Name    | State       | Alias               | Url
+------------------------------------------------------------------------------------------------------
+101 | KarafOsgiManager | ServletModel-2  | Undeployed  | /system/console     | [/system/console/*]
+103 | GogoPlugin       | ServletModel-7  | Deployed    | /gogo               | [/gogo/*]
+102 | FeaturesPlugin   | ServletModel-6  | Deployed    | /features           | [/features/*]
+101 | ResourceServlet  | /res            | Deployed    | /system/console/res | [/system/console/res/*]
+101 | KarafOsgiManager | ServletModel-11 | Deployed    | /system/console     | [/system/console/*]
+105 | InstancePlugin   | ServletModel-9  | Deployed    | /instance           | [/instance/*]
+
+
+
+

You can access the webconsole with a browser at http://localhost:8181/system/console.

+
+
+

We can see that Cellar HTTP Balancer exposed the servlets to the cluster, using the cluster:http-list command:

+
+
+
+
karaf@node1()> cluster:http-list default
+Alias               | Locations
+-----------------------------------------------------------------
+/system/console/res | http://172.17.42.1:8181/system/console/res
+/gogo               | http://172.17.42.1:8181/gogo
+/instance           | http://172.17.42.1:8181/instance
+/system/console     | http://172.17.42.1:8181/system/console
+/features           | http://172.17.42.1:8181/features
+
+
+
+

On another node (node2), we install http, http-whiteboard and cellar features:

+
+
+
+
karaf@node2()> feature:install http
+karaf@node2()> feature:install http-whiteboard
+karaf@node2()> feature:repo-add cellar 4.0.0
+karaf@node2()> feature:install cellar
+
+
+
+ + + + + +
+
Warning
+
+If you run the nodes on a single machine, you have to provision an etc/org.ops4j.pax.web.cfg configuration file
+containing the org.osgi.service.http.port property with a port number different from 8181.
+For this example, we use the following etc/org.ops4j.pax.web.cfg file:
+
+
+
+
+
org.osgi.service.http.port=8041
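Instead of editing the file by hand, you can also set the property from the node2 console with the config:property-set command (assuming the org.ops4j.pax.web configuration PID, which backs etc/org.ops4j.pax.web.cfg):

```
karaf@node2()> config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8041
```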
+
+
+
+

As we installed the cellar-http-balancer on node1 using the cluster:feature-install command, it’s automatically installed
+when node2 joins the default cluster group.

+
+
+

We can see the HTTP endpoints available on the cluster using the cluster:http-list command:

+
+
+
+
karaf@node2()> cluster:http-list default
+Alias               | Locations
+-----------------------------------------------------------------
+/system/console/res | http://172.17.42.1:8181/system/console/res
+/gogo               | http://172.17.42.1:8181/gogo
+/instance           | http://172.17.42.1:8181/instance
+/system/console     | http://172.17.42.1:8181/system/console
+/features           | http://172.17.42.1:8181/features
+
+
+
+

If we take a look at the HTTP endpoints locally available on node2 (using the http:list command), we can see the proxies
+created by Cellar HTTP Balancer:

+
+
+
+
karaf@node2()> http:list
+ID  | Servlet                    | Servlet-Name   | State       | Alias               | Url
+---------------------------------------------------------------------------------------------------------------
+100 | CellarBalancerProxyServlet | ServletModel-3 | Deployed    | /gogo               | [/gogo/*]
+100 | CellarBalancerProxyServlet | ServletModel-2 | Deployed    | /system/console/res | [/system/console/res/*]
+100 | CellarBalancerProxyServlet | ServletModel-6 | Deployed    | /features           | [/features/*]
+100 | CellarBalancerProxyServlet | ServletModel-5 | Deployed    | /system/console     | [/system/console/*]
+100 | CellarBalancerProxyServlet | ServletModel-4 | Deployed    | /instance           | [/instance/*]
+
+
+
+

You can point a browser at http://localhost:8041/system/console: you will actually use the webconsole from node1, as
+Cellar HTTP Balancer proxies the requests from node2 to node1.

+
+
+

Cellar HTTP Balancer randomly chooses one of the nodes providing the HTTP endpoint.

+
+
+
+
+
\ No newline at end of file