
JMS clustering in Geronimo

{scrollbar}

{excerpt}JMS clustering in Geronimo is handled by the ActiveMQ component using blueprint services.{excerpt} You can configure brokers to be clustered, and a JMS request can fail over to another broker if a JMS broker goes down, using [Master Slave|http://activemq.apache.org/masterslave.html] functionality.

h1. Prerequisite

Make sure the system module {{org/apache/geronimo/activemq-ra/3.0/car}} is loaded during server startup. Then, with the server stopped, update the {{config-substitutions.properties}} file under the {{/var/config}} directory to specify the IP address or host name of each ActiveMQ node.
{code:title=config-substitutions.properties}
ActiveMQHostname=hostname/IP
{code}


h1. JMS clustering scenarios

There are different kinds of Master/Slave configurations available according to the ActiveMQ documentation:
{toc:minLevel=2|maxLevel=2}

In the Geronimo server, all these configurations are handled using blueprint services. You need to update the content of {{activemq.xml}} within {{activemq-broker-blueprint-3.0.car}} under the {{/repository/org/apache/geronimo/configs/activemq-broker-blueprint/3.0}} directory in accordance with the scenario you choose. The easiest way is to unzip {{activemq-broker-blueprint-3.0.car}} and repack it after modification.
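Because a {{.car}} file is a standard ZIP archive, the modification can also be scripted instead of unzipping and repacking by hand. The following is a minimal Java sketch using the JDK's ZIP file system; the Geronimo installation path and the entry path of {{activemq.xml}} inside the archive are assumptions you should verify against your own installation.
{code:java|title=UpdateCar.java}
import java.net.URI;
import java.nio.file.*;
import java.util.Collections;

public class UpdateCar {
    public static void main(String[] args) throws Exception {
        // Assumed installation path -- adjust for your environment.
        URI car = URI.create("jar:file:///opt/geronimo/repository/org/apache/geronimo/"
                + "configs/activemq-broker-blueprint/3.0/activemq-broker-blueprint-3.0.car");
        // Mount the archive as a file system and overwrite the config entry in place.
        try (FileSystem fs = FileSystems.newFileSystem(car, Collections.<String, String>emptyMap())) {
            // Entry path inside the .car is an assumption; list the archive to confirm it.
            Path entry = fs.getPath("/activemq.xml");
            Files.copy(Paths.get("activemq.xml"), entry, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
{code}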


See the following configuration for each scenario in Geronimo.

h2. Pure Master/Slave
With this scenario, you must specify the master and slave nodes explicitly and manually restart a failed master.
h3. Master node
On the master node, tag the current node as the master by using the _brokerName_ attribute, and set the _serverHostname_ property to the master's IP address or host name, as follows.
{code:xml|title=activemq.xml}
...
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="master"
        useJmx="false"
        deleteAllMessagesOnStartup="true"
        tmpDataDirectory="${activemq.data}/tmp_storage"
        useShutdownHook="false" start="false">
...
<cm:property name="serverHostname" value="masterIP"/>
...
{code}
h3. Slave node
Because each master has only one slave in the Pure Master/Slave scenario, the slave node must know the URI of the master node through the _masterConnectorURI_ attribute, and it must also be tagged as a slave node by using the _brokerName_ attribute.
{code:xml|title=activemq.xml}
...
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="slave"
        deleteAllMessagesOnStartup="true"
        useJmx="false"
        masterConnectorURI="tcp://masterHostname:${ActiveMQPort}"
        tmpDataDirectory="${activemq.data}/tmp_storage"
        useShutdownHook="false" start="false">
...
<cm:property name="serverHostname" value="slaveIP"/>
...
{code}

h3. Client connection
JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:
{panel}
failover://(tcp://masterIP:61616,tcp://slaveIP:61616)?randomize=false
{panel}
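For example, a producer can be pointed at the failover URL through the standard ActiveMQ connection factory. The following is a minimal sketch, assuming the ActiveMQ client library is on the classpath; the queue name is illustrative:
{code:java|title=FailoverProducer.java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {
    public static void main(String[] args) throws Exception {
        // randomize=false keeps clients on the master until it becomes unavailable.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover://(tcp://masterIP:61616,tcp://slaveIP:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
        producer.send(session.createTextMessage("hello"));
        connection.close();
    }
}
{code}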


h2. Shared File system
In this scenario, you must use a shared file system to provide high availability of brokers and automatic discovery of master/slave nodes. The shared folder must allow different slaves to have _write_ permission.
h3. Each node
On each node, configure a shared directory as the place where brokers store their data by using the {{persistenceAdapter}} element as follows:
{code:xml|title=activemq.xml}
...
<cm:property name="serverHostname" value="broker1"/>
...
<amq:persistenceAdapter>
    <amq:amqPersistenceAdapter directory="/sharedFileSystem/sharedBrokerData"/>
</amq:persistenceAdapter>
...
{code}
Note that:
* For the shared file system on a Linux node, you must mount the shared directory first.
* For the shared file system on a Windows node, you can use a path such as {{//ipAddress/sharedFolder}} in the configuration.
* On each node, _broker1_ should be replaced with the exact IP address of the current node.

h3. Client connection
JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:
{panel}
failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false
{panel}
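On the consuming side the same transport applies: a blocking {{receive()}} call simply rides over the reconnect when a slave acquires the lock on the shared store and takes over. A minimal sketch under the same assumptions as the producer example above:
{code:java|title=FailoverConsumer.java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
        // Wait up to ten seconds for a message; failover reconnects transparently.
        Message message = consumer.receive(10000);
        if (message instanceof TextMessage) {
            System.out.println(((TextMessage) message).getText());
        }
        connection.close();
    }
}
{code}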

h2. JDBC Master Slave
In this scenario, you must use a shared database as the persistence engine, which also provides automatic recovery.
h3. Each node
On each node, configure a shared database pool by using the {{jdbcPersistenceAdapter}} element as follows. We use a remote Oracle database server as an example:
{code:xml|title=activemq.xml}
...
<cm:property name="serverHostname" value="broker1"/>
...
<amq:persistenceAdapter>
    <amq:jdbcPersistenceAdapter dataSource="#oracle-ds"/>
</amq:persistenceAdapter>
...
<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@dbServer:1521:AMQDB"/>
    <property name="username" value="scott"/>
    <property name="password" value="tiger"/>
    <property name="maxActive" value="200"/>
    <property name="poolPreparedStatements" value="true"/>
</bean>
...
{code}
Note that:
* For the database server, _dbServer_ should be replaced with the actual IP address of the database server.
* On each node, _broker1_ should be replaced with the exact IP address of the current node.
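Before starting the brokers, it is worth verifying that each node can actually reach the shared database. A minimal sketch, assuming the Oracle JDBC driver is on the classpath and reusing the placeholder host and credentials from the configuration above:
{code:java|title=CheckSharedDb.java}
import java.sql.Connection;
import java.sql.DriverManager;

public class CheckSharedDb {
    public static void main(String[] args) throws Exception {
        // Same URL and credentials as the oracle-ds bean in activemq.xml.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbServer:1521:AMQDB", "scott", "tiger")) {
            System.out.println("Connected to " + conn.getMetaData().getDatabaseProductName());
        }
    }
}
{code}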

h3. Client connection
JMS clients use the failover:// protocol to locate brokers in a cluster. See the following example:
{panel}
failover://(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)?randomize=false
{panel}
