Modified: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/QpidJavaBroker-ManagementTools.html
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/QpidJavaBroker-ManagementTools.html?rev=1368244&r1=1368243&r2=1368244&view=diff
==============================================================================
--- qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/QpidJavaBroker-ManagementTools.html (original)
+++ qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/QpidJavaBroker-ManagementTools.html Wed Aug  1 20:54:46 2012
@@ -1,101 +1,73 @@
-Chapter 4. Management Tools
-Table of Contents
-4.1. MessageStore Tool
-4.1.1. MessageStore Tool
-4.2.
-4.1. MessageStore Tool
-4.1.1. MessageStore Tool
-We have a number of implementations of the Qpid MessageStore interface. This tool allows the interrogation of these stores while the broker is offline.
-4.1.1.1. MessageStore Implementations
-4.1.1.2. Introduction
-Each of the MessageStore implementations provides different back-end storage for its messages and so would need a different tool to be able to interrogate its contents at the back end.
-What this tool does is to utilise the Java broker code base to access the contents of the storage, providing the user with a consistent means to inspect the storage contents in broker memory. The tool allows the current messages in the store to be inspected and copied/moved between queues. The tool uses the message instance in memory for all its access paths, but changes made will be reflected in the physical store (if one exists).
-4.1.1.3. Usage
-The tools distribution currently includes a unix shell command 'msTool.sh'; this script will launch the java tool.
-The tool loads $QPID_HOME/etc/config.xml by default. If an alternative broker configuration is required, this should be provided on the command line as would be done for the broker.
-msTool.sh -c <path to different config.xml>
-On startup the user is presented with a command prompt.
-$ msTool.sh
-MessageStoreTool - for examining Persistent Qpid Broker MessageStore instances
-bdb$
-4.1.1.4. Available Commands
-The available commands in the tool can be seen through the use of the 'help' command.
-bdb$ help
-+----------------------------------------------------------------+
-|                       Available Commands                       |
-+----------------------------------------------------------------+
-| Command | Description                                          |
-+----------------------------------------------------------------+
-| quit    | Quit the tool.                                       |
-| list    | list available items.                                |
-| dump    | Dump selected message content. Default: show=content |
-| load    | Loads specified broker configuration file.           |
-| clear   | Clears any selection.                                |
-| show    | Shows the messages headers.                          |
-| select  | Perform a selection.                                 |
-| help    | Provides detailed help on commands.                  |
-+----------------------------------------------------------------+
-bdb$
-A brief description is displayed, and further usage information is shown with 'help <command>'.
-bdb$ help list
-list availble items.
-Usage:list queues [<exchange>] | exchanges | bindings [<exchange>] | all
-bdb$
-4.1.1.5. Future Work
-Currently the tool only works whilst the broker is offline, i.e. it is up, but not accepting AMQP connections. This requires a stop/start of the broker. If this functionality were incorporated into the broker, then a telnet functionality could be provided, allowing online management.
+Chapter 4. Management Tools
+Table of Contents
+4.1. Qpid Java Broker Management CLI
+4.1.1. How to build Apache Qpid CLI
+4.1. Qpid Java Broker Management CLI
+4.1.1. How to build Apache Qpid CLI
+4.1.1.1. Build Instructions - General
+Before anything else, please build Apache Qpid by referring to the installation guide here: ???.
+After successfully building Apache Qpid you will be able to start the Apache Qpid Java broker; only then are you in a position to use the Qpid CLI.
+4.1.1.2. Check out the Source
+First check out the source from the subversion repository. Please visit the following link for more information about the different versions of Qpid CLI.
+???
+4.1.1.3. Prerequisites
+For the broker code you need JDK 1.5.0_15 or later. You should set JAVA_HOME and include the bin directory in your PATH.
+Check that this is OK by executing java -version.
+4.1.1.4. Building Apache Qpid CLI
+This project currently has only an Ant build system. Please install Ant before trying to build Qpid CLI.
+4.1.1.5. Compiling
+To compile the source, please run the following command:
+ant compile
+To compile the test source, run the following command:
Added: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html?rev=1368244&view=auto
==============================================================================
--- qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html (added)
+++ qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html Wed Aug  1 20:54:46 2012
@@ -0,0 +1,438 @@

1.6. High Availability

1.6.1. General Introduction

The term High Availability (HA) usually refers to having a number of instances of a service, such as a Message Broker, available so that should a service unexpectedly fail, or need to be shut down for maintenance, users may quickly connect to another instance and continue their work with minimal interruption. HA is one way to make an overall system more resilient by eliminating a single point of failure.

HA offerings are usually categorised as Active/Active or Active/Passive. An Active/Active system is one where all nodes within the cluster are usually available for use by clients all of the time. In an Active/Passive system, only one node within the cluster is available for use by clients at any one time, whilst the others are in some kind of standby state, ready to step in quickly in the event the active node becomes unavailable.

1.6.2. HA offerings of the Java Broker

The Java Broker's HA offering became available at release 0.18. HA is provided by way of the HA features built into the Java Edition of the Berkeley Database (BDB JE), and as such is currently only available to Java Broker users who use the optional BDB JE based persistence store. This optional store requires the use of BDB JE, which is licensed under the Sleepycat Licence; this is not compatible with the Apache Licence and thus BDB JE is not distributed with Qpid. Users who elect to use this optional store for the broker have to provide this dependency.

HA in the Java Broker provides an Active/Passive mode of operation, with virtual hosts being the unit of replication. The Active node (referred to as the Master) accepts all work from all the clients. The Passive nodes (referred to as Replicas) are unavailable for work: the only task they must perform is to remain in sync with the Master node by consuming a replication stream containing all data and state.

If the Master node fails, a Replica node is elected to become the new Master node. All clients automatically fail over [1] to the new Master and continue their work.

The Java Broker HA solution is incompatible with the HA solution offered by the CPP Broker. It is not possible to co-locate Java and CPP Brokers within the same cluster.

HA is not currently available for those using the Derby Store or Memory Message Store.

1.6.3. Two Node Cluster

1.6.3.1. Overview

In this HA solution, a cluster is formed with two nodes: one node serves as master and the other as replica.

All data and state required for the operation of the virtual host is automatically sent from the master to the replica. This is called the replication stream. The master virtual host confirms each message is on the replica before the client transaction completes. The exact way the client waits for the master and replica is governed by the durability configuration, which is discussed later. In this way, the replica remains ready to take over the role of the master if the master becomes unavailable.

It is important to note that an inherent limitation of two node clusters is that the replica node cannot make itself master automatically in the event of master failure. This is because the replica has no way to distinguish between a network partition (with, potentially, the master still alive on the other side of the partition) and the case of genuine master failure. (If the replica were to elect itself as master, the cluster would run the risk of a split-brain scenario.) In the event of a master failure, a third party must designate the replica as primary. This process is described in more detail later.

Clients connect to the cluster using a failover url. This allows the client to maintain a connection to the master in a way that is transparent to the client application.

1.6.3.2. Depictions of cluster operation

In this section, the operation of the cluster is depicted through a series of figures supported by explanatory text.

Figure 1.1. Key for figures


Normal Operation

The figure below illustrates normal operation. Clients connecting to the cluster by way of the failover URL achieve a connection to the master. As clients perform work (message production, consumption, queue creation etc), the master additionally sends this data to the replica over the network.

Figure 1.2. Normal operation of a two-node cluster


Master Failure and Recovery

The figure below illustrates a sequence of events whereby the master suffers a failure and the replica is made the master to allow the clients to continue to work. Later the old master is repaired and comes back on-line in the replica role.

The item numbers in this list apply to the numbered boxes in the figure below.

  1. System operating normally

  2. Master suffers a failure and disconnects all clients. The Replica realises that it is no longer in contact with the master. Clients begin to try to reconnect to the cluster, although these connection attempts will fail at this point.

  3. A third-party (an operator, a script, or a combination of the two) verifies that the master has truly failed and is no longer running. If it has truly failed, the decision is made to designate the replica as primary, allowing it to assume the role of master despite the other node being down. This primary designation is performed using JMX.

  4. Client connections to the new master succeed and the service is restored, albeit without a replica.

  5. The old master is repaired and brought back on-line. It automatically rejoins the cluster in the replica role.

Figure 1.3. Failure of master and recovery sequence


Replica Failure and Recovery

The figure that follows illustrates a sequence of events whereby the replica suffers a failure, leaving the master to continue processing alone. Later the replica is repaired and is restarted. It rejoins the cluster so that it is once again ready to take over in the event of master failure.

The behaviour in the replica failure case is governed by the designatedPrimary configuration item. If set to true on the master, the master will continue to operate solo without outside intervention when the replica fails. If false, a third-party must designate the master as primary in order for it to continue solo.

The item numbers in this list apply to the numbered boxes in the figure below. This example assumes that designatedPrimary is true on the original master node.

  1. System operating normally

  2. Replica suffers a failure. The Master realises that the replica is no longer in contact but, as designatedPrimary is true, the master continues processing solo and thus client connections are uninterrupted by the loss of the replica. The system continues operating normally, albeit with a single node.

  3. Replica is repaired.

  4. After catching up with missed work, replica is once again ready to take over in the event of master failure.

Figure 1.4. Failure of replica and subsequent recovery sequence


Network Partition and Recovery

The figure below illustrates the sequence of events that would occur if the network between master and replica were to suffer a partition, and the nodes were out of contact with one another.

As with Replica Failure and Recovery, the behaviour is governed by designatedPrimary. Only if designatedPrimary is true on the master will the master continue solo.

The item numbers in this list apply to the numbered boxes in the figure below. This example assumes that designatedPrimary is true on the original master node.

  1. System operating normally

  2. Network suffers a failure. The Master realises that the replica is no longer in contact but, as designatedPrimary is true, the master continues processing solo and thus client connections are uninterrupted by the network partition between master and replica.

  3. Network is repaired.

  4. After catching up with missed work, the replica is once again ready to take over in the event of master failure. The system is operating normally again.

Figure 1.5. Partition of the network separating master and replica


Split Brain

A split-brain is a situation where the two node cluster has two masters. BDB normally strives to prevent this situation arising by preventing two nodes in a cluster from being master at the same time. However, if the network suffers a partition, and a third-party intervenes incorrectly and makes the replica a second master, a split-brain will be formed and both masters will proceed to perform work independently of one another.

There is no automatic recovery from a split-brain.

Manual intervention will be required to choose which store will be retained as master and which will be discarded. Manual intervention will be required to identify and repeat the lost business transactions.

The item numbers in this list apply to the numbered boxes in the figure below.

  1. System operating normally

  2. Network suffers a failure. The Master realises that the replica is no longer in contact but, as designatedPrimary is true, the master continues processing solo. Client connections are uninterrupted by the network partition.

    A third-party erroneously designates the replica as primary while the original master continues running (now solo).

  3. As the nodes cannot see one another, both behave as masters. Clients may perform work against both master nodes.

Figure 1.6. Split Brain


1.6.4. Multi Node Cluster

Multi node clusters, that is clusters where the number of nodes is three or more, are not yet ready for use.

1.6.5. Configuring a Virtual Host to be a node

To configure a virtualhost as a cluster node, configure the virtualhost.xml in the following manner:


<virtualhost>
  <name>myhost</name>
  <myhost>
    <store>
      <class>org.apache.qpid.server.store.berkeleydb.BDBHAMessageStore</class>
      <environment-path>${work}/bdbhastore</environment-path>
      <highAvailability>
        <groupName>myclustername</groupName>
        <nodeName>mynode1</nodeName>
        <nodeHostPort>node1host:port</nodeHostPort>
        <helperHostPort>node1host:port</helperHostPort>
        <durability>NO_SYNC\,NO_SYNC\,SIMPLE_MAJORITY</durability>
        <coalescingSync>true|false</coalescingSync>
        <designatedPrimary>true|false</designatedPrimary>
      </highAvailability>
    </store>
    ...
  </myhost>
</virtualhost>


The groupName is the logical name of the cluster. All nodes within the cluster must use the same groupName in order to be considered part of the cluster.

The nodeName is the logical name of the node. All nodes within the cluster must have a unique name. It is recommended that node names be chosen from a different nomenclature from that of the servers on which they are hosted, in case the need arises to move a node to a new server in the future.

The nodeHostPort is the hostname and port number used by this node to communicate with the other nodes in the cluster. For the hostname, an IP address, hostname or fully qualified hostname may be used. For the port number, any free port can be used. It is important that this address is stable over time, as BDB records and uses this address internally.

The helperHostPort is the hostname and port number that new nodes use to discover other nodes within the cluster when they are newly introduced to the cluster. When configuring the first node, set the helperHostPort to its own nodeHostPort. For the second and subsequent nodes, set their helperHostPort to that of the first node.

durability controls the durability guarantees made by the cluster. It is important that all nodes use the same value for this property. The default value is NO_SYNC\,NO_SYNC\,SIMPLE_MAJORITY. Owing to the internal use of Apache Commons Config, it is currently necessary to escape the commas within the durability string.

coalescingSync controls the coalescing-sync mode of Qpid. It is important that all nodes use the same value. If omitted, it defaults to true.

The designatedPrimary is applicable only to the two-node case. It governs the behaviour of a node when the other node fails or becomes uncontactable. If true, the node will be designated as primary at startup and will be able to continue operating as a single-node master. If false, the node will transition to an unavailable state until a third-party manually designates the node as primary or the other node is restored. It is suggested that the node that normally fulfils the role of master is set to true in the config file and the node that is normally replica is set to false. Be aware that setting both nodes to true will lead to a failure to start up, as both cannot be designated at the point of contact. Designating both nodes as primary at runtime (using the JMX interface) will lead to a split-brain in the case of network partition and must be avoided.
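
As an illustration of the runtime designation mentioned above, the following is a minimal sketch using the standard javax.management API against the read/write DesignatedPrimary attribute of the BDBHAMessageStore MBean (described in the Qpid JMX API for HA section later in this chapter); the JMX host, port, credentials and virtualhost name here are illustrative assumptions, not fixed values.

import java.util.HashMap;
import java.util.Map;

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DesignatePrimary {
    public static void main(String[] args) throws Exception {
        Map<String, Object> env = new HashMap<String, Object>();
        // credentials: user name and password (illustrative)
        env.put(JMXConnector.CREDENTIALS, new String[] {"admin", "admin"});

        // Illustrative JMX endpoint of the surviving node
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9001/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName store = new ObjectName(
                    "org.apache.qpid:type=BDBHAMessageStore,name=test");
            // Flip the read/write attribute so the surviving replica may
            // assume the master role
            mbsc.setAttribute(store, new Attribute("DesignatedPrimary", true));
        } finally {
            connector.close();
        }
    }
}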

Note

Using domain names in helperHostPort and nodeHostPort is preferable to using IP addresses, because IP addresses tend to change more frequently than domain names. If a server's IP address changes but its domain name remains the same, an HA cluster configured with domain names can continue working as normal. If IP addresses are used and they change over time, the Qpid JMX API for HA can be used to update the addresses or remove the nodes from the cluster.

1.6.5.1. Passing BDB environment and replication configuration options

It is possible to pass BDB environment and replication configuration options from the virtualhost.xml. Environment configuration options are passed using the envConfig element, and replication config using repConfig.

For example, to override the BDB environment configuration options je.cleaner.threads and je.txn.timeout:

         ...
      </highAvailability>
      <envConfig>
        <name>je.cleaner.threads</name>
        <value>2</value>
      </envConfig>
      <envConfig>
        <name>je.txn.timeout</name>
        <value>15 min</value>
      </envConfig>
      ...
    </store>

And to override the BDB replication configuration option je.rep.insufficientReplicasTimeout:

         ...
      </highAvailability>
      ...
      <repConfig>
        <name>je.rep.insufficientReplicasTimeout</name>
        <value>2</value>
      </repConfig>
      <envConfig>
        <name>je.txn.timeout</name>
        <value>10 s</value>
      </envConfig>
      ...
    </store>

1.6.6. Durability Guarantees

The term durability is used to mean that once a transaction is committed, it remains committed regardless of subsequent failures. A highly durable system is one where loss of a committed transaction is extremely unlikely, whereas with a less durable system loss of a transaction is likely in a greater number of scenarios. Typically, the more highly durable a system is, the slower and more costly it will be.

Qpid exposes all the durability controls offered by BDB JE HA, plus a Qpid-specific optimisation called coalescing-sync, which defaults to enabled.

1.6.6.1. BDB Durability Controls

BDB expresses durability as a triplet with the following form:

<master sync policy>,<replica sync policy>,<replica acknowledgement policy>

The sync policies control whether the committing thread awaits the successful completion of the write, or the write and sync, before continuing. The master sync policy and replica sync policy need not be the same.

For master and replica sync policies, the available values are: SYNC, WRITE_NO_SYNC, NO_SYNC. SYNC offers the highest durability whereas NO_SYNC offers the lowest.

Note: the combination of a master sync policy of SYNC and coalescing-sync true would result in poor performance with no corresponding increase in durability guarantee. It cannot be used.

The acknowledgement policy defines whether, when the master commits a transaction, it also awaits the replica(s) committing the same transaction before continuing. For the two-node case, ALL and SIMPLE_MAJORITY are equal.

For the acknowledgement policy, the available values are: ALL, SIMPLE_MAJORITY, NONE.
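
For readers who want to see the triplet in code, here is a minimal sketch (not taken from the Qpid codebase) of how the same three policies are expressed with the BDB JE API, assuming the com.sleepycat.je.Durability class and its nested SyncPolicy and ReplicaAckPolicy enums from BDB JE 5.x:

import com.sleepycat.je.Durability;

public class DurabilityTriplet {
    public static void main(String[] args) {
        // <master sync policy>, <replica sync policy>, <replica ack policy>
        Durability durability = new Durability(
                Durability.SyncPolicy.NO_SYNC,                 // master sync policy
                Durability.SyncPolicy.NO_SYNC,                 // replica sync policy
                Durability.ReplicaAckPolicy.SIMPLE_MAJORITY);  // acknowledgement policy

        // The same triplet can also be parsed from its string form. Note that
        // the backslash escaping shown in virtualhost.xml is required by
        // Apache Commons Config, not by BDB JE itself.
        Durability parsed = Durability.parse("NO_SYNC,NO_SYNC,SIMPLE_MAJORITY");

        System.out.println(durability);
        System.out.println(parsed);
    }
}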

1.6.6.2. Coalescing-sync

If enabled (the default), Qpid works to reduce the number of separate file-system sync operations performed by the master on the underlying storage device, thus improving performance. It does this by coalescing the separate sync operations arising from different clients' commit operations occurring at approximately the same time. It does this in such a manner as not to reduce the ACID guarantees of the system.

Coalescing-sync has no effect on the behaviour of the replicas.
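
The following is purely an illustrative sketch of the coalescing idea and is not Qpid's actual implementation: many committing threads wait on a shared monitor while a single physical sync covers every commit written since the previous one.

// Illustrative only - demonstrates the batching principle behind
// coalescing-sync, not the Qpid broker's real code.
public class CoalescingSyncSketch {
    private final Object lock = new Object();
    private long lastSyncedCommit = 0;   // highest commit number already sync'd
    private long highestRequested = 0;   // highest commit number awaiting sync

    // Called by each committing thread after its transaction has been written.
    public void awaitSync(long commitNumber) throws InterruptedException {
        synchronized (lock) {
            highestRequested = Math.max(highestRequested, commitNumber);
            while (lastSyncedCommit < commitNumber) {
                lock.wait();
            }
        }
    }

    // Called by a single sync thread: one physical fsync (for example
    // FileChannel.force) covers every commit written since the last sync.
    public void syncBatch() {
        long target;
        synchronized (lock) {
            target = highestRequested;
        }
        // ... perform the single physical sync of the log here ...
        synchronized (lock) {
            lastSyncedCommit = Math.max(lastSyncedCommit, target);
            lock.notifyAll();
        }
    }
}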

1.6.6.3. Default

The default durability guarantee is NO_SYNC, NO_SYNC, SIMPLE_MAJORITY with coalescing-sync enabled. The effect of this combination is described in the table below. It offers a good compromise between durability guarantee and performance, with writes being guaranteed on the master and the additional guarantee that a majority of replicas have received the transaction.

1.6.6.4. Examples

Here are some examples illustrating the effects of the durability and coalescing-sync settings.


Table 1.2. Effect of different durability guarantees

1. Durability: NO_SYNC, NO_SYNC, SIMPLE_MAJORITY; Coalescing-sync: true
   Before the commit returns to the client, the transaction will be written/sync'd to the Master's disk (effect of coalescing-sync) and a majority of the replica(s) will have acknowledged the receipt of the transaction. The replicas will write and sync the transaction to their disk at a point in the future governed by ReplicationMutableConfig#LOG_FLUSH_INTERVAL.

2. Durability: NO_SYNC, WRITE_NO_SYNC, SIMPLE_MAJORITY; Coalescing-sync: true
   Before the commit returns to the client, the transaction will be written/sync'd to the Master's disk (effect of coalescing-sync) and a majority of the replica(s) will have acknowledged the write of the transaction to their disk. The replicas will sync the transaction to disk at a point in the future with an upper bound governed by ReplicationMutableConfig#LOG_FLUSH_INTERVAL.

3. Durability: NO_SYNC, NO_SYNC, NONE; Coalescing-sync: false
   After the commit returns to the client, the transaction is neither guaranteed to be written to the disk of the master nor received by any of the replicas. The master and replicas will write and sync the transaction to their disk at a point in the future with an upper bound governed by ReplicationMutableConfig#LOG_FLUSH_INTERVAL. This offers the weakest durability guarantee.



1.6.7. Client failover configuration

Details about the format of Qpid connection URLs can be found in the section Connection URLs of the book Programming In Apache Qpid.

The failover policy option in the connection URL for the HA Cluster should be set to roundrobin. The Master broker should be put in first place in the brokerlist URL option. The connectdelay option in the broker URL should be set to a value greater than 1000 milliseconds. If it is desired that clients re-connect automatically after a master-to-replica failover, cyclecount should be tuned so that the retry period is longer than the expected length of time to perform the failover.

Example 1.1. Example of connection URL for the HA Cluster

amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672?connectdelay='2000'&retries='3';tcp://localhost:5671?connectdelay='2000'&retries='3';tcp://localhost:5673?connectdelay='2000'&retries='3''&failover='roundrobin?cyclecount='30''
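
As a hedged illustration only, a client might use the URL above with the Qpid Java client roughly as follows; this assumes the org.apache.qpid.client.AMQConnectionFactory class from the Qpid client library, and the broker addresses are those of the example.

import javax.jms.Connection;

import org.apache.qpid.client.AMQConnectionFactory;

public class FailoverConnect {
    public static void main(String[] args) throws Exception {
        String url = "amqp://guest:guest@clientid/test?brokerlist="
                + "'tcp://localhost:5672?connectdelay='2000'&retries='3';"
                + "tcp://localhost:5671?connectdelay='2000'&retries='3';"
                + "tcp://localhost:5673?connectdelay='2000'&retries='3''"
                + "&failover='roundrobin?cyclecount='30''";

        AMQConnectionFactory factory = new AMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual; on master
        // failure the client retries the brokers in the list round-robin ...
        connection.close();
    }
}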

1.6.8. Qpid JMX API for HA

Qpid exposes the BDB HA store information via its JMX interface and provides APIs to remove a Node from the group, update a Node IP address, and assign a Node as the designated primary.

An instance of the BDBHAMessageStore MBean is instantiated by the broker for each virtualhost using the HA store.

A reference to this MBean can be obtained via the JMX API using an ObjectName like org.apache.qpid:type=BDBHAMessageStore,name=<virtualhost name>, where <virtualhost name> is the name of a specific virtualhost on the broker.

Mbean BDBHAMessageStore attributes

Name               | Type        | Accessibility | Description
GroupName          | String      | Read only     | Name identifying the group
NodeName           | String      | Read only     | Unique name identifying the node within the group
NodeHostPort       | String      | Read only     | Host/port used to replicate data between this node and others in the group
HelperHostPort     | String      | Read only     | Host/port used to allow a new node to discover other group members
NodeState          | String      | Read only     | Current state of the node
ReplicationPolicy  | String      | Read only     | Node replication durability
DesignatedPrimary  | boolean     | Read/Write    | Designated primary flag. Applicable to the two node case.
CoalescingSync     | boolean     | Read only     | Coalescing sync flag. Applicable to the master sync policies NO_SYNC and WRITE_NO_SYNC only.
getAllNodesInGroup | TabularData | Read only     | Get all nodes within the group, regardless of whether currently attached or not
Mbean BDBHAMessageStore operations

removeNodeFromGroup
  Parameters: nodeName (string) - name of node
  Returns: void
  Description: Remove an existing node from the group

updateAddress
  Parameters: nodeName (string) - name of node; newHostName (string) - new host name; newPort (int) - new port number
  Returns: void
  Description: Update the address of another node. The node must be in a STOPPED state.
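
As a sketch, both operations can be invoked through the generic javax.management API; the node name, host and port below are illustrative, and the MBeanServerConnection and ObjectName are obtained exactly as in Example 1.2 later in this section.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class HaStoreOperations {
    // removeNodeFromGroup(nodeName)
    static void removeNode(MBeanServerConnection mbsc, ObjectName store)
            throws Exception {
        mbsc.invoke(store, "removeNodeFromGroup",
                new Object[] {"Node-5002"},                 // illustrative node name
                new String[] {"java.lang.String"});
    }

    // updateAddress(nodeName, newHostName, newPort) - the target node must
    // be in a STOPPED state
    static void updateAddress(MBeanServerConnection mbsc, ObjectName store)
            throws Exception {
        mbsc.invoke(store, "updateAddress",
                new Object[] {"Node-5002", "newhost.example.com", 5002},
                new String[] {"java.lang.String", "java.lang.String", "int"});
    }
}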

Figure 1.7. BDBHAMessageStore view from jconsole.

Example 1.2. Example of Java code to get the node state value

import java.util.HashMap;
import java.util.Map;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

Map<String, Object> environment = new HashMap<String, Object>();

// credentials: user name and password
environment.put(JMXConnector.CREDENTIALS, new String[] {"admin","admin"});

// connect to the broker's JMX port and look up the HA store MBean
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9001/jmxrmi");
JMXConnector jmxConnector = JMXConnectorFactory.connect(url, environment);
MBeanServerConnection mbsc = jmxConnector.getMBeanServerConnection();

ObjectName queueObjectName = new ObjectName("org.apache.qpid:type=BDBHAMessageStore,name=test");
String state = (String)mbsc.getAttribute(queueObjectName, "NodeState");

System.out.println("Node state:" + state);

Example system output:

Node state:MASTER

1.6.9. Monitoring cluster

In order to discover potential issues with the HA Cluster early, all nodes in the cluster should be monitored on a regular basis using the following techniques:

• Scraping broker log files for WARN or ERROR entries and operational log entries like:

  • MST-1007 : Store Passivated. It can indicate that the Master virtual host has gone down.

  • MST-1006 : Recovery Complete. It can indicate that a former Replica virtual host is up and has become the Master.

• Disk space usage and system load, using system tools.

• Berkeley HA node status, using the DbPing utility.

  Example 1.3. Using the DbPing utility for monitoring HA nodes.

  java -jar je-5.0.48.jar DbPing -groupName TestClusterGroup -nodeName Node-5001 -nodeHost localhost:5001 -socketTimeout 10000
  Current state of node: Node-5001 from group: TestClusterGroup
    Current state: MASTER
    Current master: Node-5001
    Current JE version: 5.0.48
    Current log version: 8
    Current transaction end (abort or commit) VLSN: 165
    Current master transaction end (abort or commit) VLSN: 0
    Current active feeders on node: 0
    Current system load average: 0.35

  In the example above, the DbPing utility requested the status of the cluster node named Node-5001 from replication group TestClusterGroup, running on host localhost:5001. The state of the node was reported on system output.

• Using the Qpid broker JMX interfaces.

  The BDBHAMessageStore MBean can be used to request the following node information:

  • NodeState - indicates whether the node is a Master or Replica.

  • Durability - the node's replication durability.

  • DesignatedPrimary - indicates whether the Master node is designated primary.

  • GroupName - the replication group name.

  • NodeName - the node name.

  • NodeHostPort - the node host and port.

  • HelperHostPort - the helper host and port.

  • AllNodesInGroup - lists all nodes in the replication group, including their names, hosts and ports.

  For more details about the BDBHAMessageStore MBean please refer to the section Qpid JMX API for HA; a usage sketch follows this list.
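
For example, the group membership can be read as JMX open-MBean TabularData. The sketch below (connection setup as in Example 1.2) prints each row without assuming particular column names; the tables above list the attribute as both AllNodesInGroup and getAllNodesInGroup, so treat the exact attribute name as an assumption.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.TabularData;

public class ListGroupNodes {
    static void printNodes(MBeanServerConnection mbsc) throws Exception {
        ObjectName store = new ObjectName(
                "org.apache.qpid:type=BDBHAMessageStore,name=test");
        // Attribute name as listed in the monitoring bullet above
        TabularData nodes =
                (TabularData) mbsc.getAttribute(store, "AllNodesInGroup");
        for (Object row : nodes.values()) {
            CompositeData node = (CompositeData) row;
            // Print every column of the row (name, host/port, state, ...)
            for (String key : node.getCompositeType().keySet()) {
                System.out.print(key + "=" + node.get(key) + "  ");
            }
            System.out.println();
        }
    }
}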

1.6.10. Disk space requirements

Disk space is a critical resource for the HA Qpid broker.

If a Replica goes down (or falls behind the Master in a 2 node cluster where the Master is designated primary) and the Master continues running, the non-replicated store files are kept on the Master's disk for the period of time specified by the je.rep.repStreamTimeout JE setting, so that this data can be replicated later when the Replica comes back. The broker sets this to 1 hour by default. The setting can be overridden as described in Section 1.6.5.1, “Passing BDB environment and replication configuration options”.

Depending on the application's publishing/consuming rates and message sizes, the disk might fill up during this period due to the preserved logs. Please make sure to allocate enough disk space to prevent this from happening.

1.6.11. Network Requirements

The HA Cluster performance depends on the network bandwidth, its use by existing traffic, and quality of service.

In order to achieve the best performance it is recommended to use a separate network infrastructure for the Qpid HA Nodes, which might include installing dedicated network hardware on Broker hosts, assigning a higher priority to replication ports, or installing the cluster in a separate network not impacted by any other traffic.

1.6.12. Security

At the moment the Berkeley replication API supports only the TCP/IP protocol for transferring replication data between Master and Replicas.

As a result, the replicated data is unprotected and can be intercepted by anyone having access to the replication network.

Also, anyone with access to this network can introduce a new node and thereby receive a copy of the data.

In order to reduce these security risks, it is recommended to run the entire HA cluster in a separate network protected from general access.

1.6.13. Backups

In order to protect the entire cluster from catastrophes which might destroy all cluster nodes, backups of the Master store should be taken on a regular basis.

The Qpid Broker distribution includes the "hot" backup utility backup.sh, which can be found in the broker bin folder. This utility can perform the backup while the broker is running.

The backup.sh script invokes org.apache.qpid.server.store.berkeleydb.BDBBackup to do the job.

You can also run this class from the command line, as in the example below:

Example 1.4. Performing store backup by using BDBBackup class directly

java -cp qpid-bdbstore-0.18.jar org.apache.qpid.server.store.berkeleydb.BDBBackup -fromdir path/to/store/folder -todir path/to/backup/folder

In the example above the BDBBackup utility is called from qpid-bdbstore-0.18.jar to back up the store at path/to/store/folder and copy the store logs into path/to/backup/folder.

Linux and Unix users can take advantage of the backup.sh bash script by running it in a similar way.

Example 1.5. Performing store backup by using backup.sh bash script

backup.sh -fromdir path/to/store/folder -todir path/to/backup/folder

Note

Do not forget to ensure that the Master store is being backed up, in the event that the Node elected Master changes during the lifecycle of the cluster.

1.6.14. Migration of a non-HA store to HA

Non-HA stores starting from schema version 4 (the 0.14 Qpid release) can be automatically converted into an HA store on broker startup if replication is first enabled with the DbEnableReplication utility from the BDB JE jar.

DbEnableReplication converts a non-HA store into an HA store and can be used as follows:

Example 1.6. Enabling replication

java -jar je-5.0.48.jar DbEnableReplication -h /path/to/store -groupName MyReplicationGroup -nodeName MyNode1 -nodeHostPort localhost:5001

In the example above, the je jar of version 5.0.48 is used to convert the store at /path/to/store into an HA store with replication group name MyReplicationGroup and node name MyNode1, running on host localhost and port 5001.

After running DbEnableReplication and updating the virtual host store configuration to be an HA message store, as in the example below, on broker start-up the store schema will be upgraded to the most recent version and the broker can be used as normal.

Example 1.7. Example of XML configuration for HA message store

<store>
    <class>org.apache.qpid.server.store.berkeleydb.BDBHAMessageStore</class>
    <environment-path>/path/to/store</environment-path>
    <highAvailability>
        <groupName>MyReplicationGroup</groupName>
        <nodeName>MyNode1</nodeName>
        <nodeHostPort>localhost:5001</nodeHostPort>
        <helperHostPort>localhost:5001</helperHostPort>
    </highAvailability>
</store>

The Replica nodes can be started with empty stores. The data will be automatically copied from Master to Replica on Replica start-up. This will take a period of time determined by the size of the Master's store and the network bandwidth between the nodes.

Note

Due to existing caveats in Berkeley JE around copying data from Master to Replica, it is recommended to restart the Master node after the store schema upgrade is finished, before starting the Replica nodes.

1.6.15. Disaster Recovery

This section describes the steps required to restore an HA broker cluster from backup.

Detailed instructions on how to perform a backup of a replicated environment can be found here.

At this point we assume that backups are collected on a regular basis from the Master node.

The replication configuration of a cluster is stored internally in the HA message store. This information includes the IP addresses of the nodes. If the HA message store needs to be restored on a different host with a different IP address, the cluster replication configuration must be reset.

Oracle provides a command line utility DbResetRepGroup to reset the members of a replication group and replace the group with a new group consisting of a single new member, as described by the arguments supplied to the utility.

The cluster can be restored with the following steps:

• Copy the log files into the store folder from the backup.

• Use DbResetRepGroup to reset the existing environment. See the example below:

  Example 1.8. Resetting of replication group with DbResetRepGroup

  java -cp je-5.0.48.jar com.sleepycat.je.rep.util.DbResetRepGroup -h ha-work/Node-5001/bdbstore -groupName TestClusterGroup -nodeName Node-5001 -nodeHostPort localhost:5001

  In the example above the DbResetRepGroup utility from Berkeley JE version 5.0.48 is used to reset the store at location ha-work/Node-5001/bdbstore and set a replication group TestClusterGroup with a node Node-5001 running at localhost:5001.

• Start a broker with the HA store configured as specified when running the DbResetRepGroup utility.

• Start the replica nodes with the same replication group and a helper host port pointing to the new master. The store content will be copied from the Master into the Replicas on their start-up.

1.6.16. Performance

The aim of this section is not to provide exact performance metrics relating to HA, as these depend heavily on the test environment, but rather to show the impact of HA on Qpid broker performance in comparison with the non-HA case.

To test the impact of HA on broker performance, a special test script was written using the Qpid performance test framework. The script opened a number of connections to the Qpid broker, created producers and consumers on separate connections, published test messages with concurrent producers into a test queue, and consumed them with concurrent consumers. The table below shows the number of producers/consumers used in the tests. The overall throughput was collected for each configuration.

Number of producers/consumers in performance tests

Test | Number of producers | Number of consumers
1    | 1                   | 1
2    | 2                   | 2
3    | 4                   | 4
4    | 8                   | 8
5    | 16                  | 16
6    | 32                  | 32
7    | 64                  | 64

The test was run against the following Qpid Broker configurations:

• Non-HA Broker

• HA 2 Node Cluster with durability SYNC,SYNC,ALL

• HA 2 Node Cluster with durability WRITE_NO_SYNC,WRITE_NO_SYNC,ALL

• HA 2 Node Cluster with durability WRITE_NO_SYNC,WRITE_NO_SYNC,ALL and coalescing-sync Qpid mode

• HA 2 Node Cluster with durability WRITE_NO_SYNC,NO_SYNC,ALL and coalescing-sync Qpid mode

• HA 2 Node Cluster with durability NO_SYNC,NO_SYNC,ALL and coalescing-sync Qpid mode

The environment used in testing consisted of 2 servers with 4 CPU cores (2x Intel(R) Xeon(R) CPU 5150 @ 2.66GHz), 4GB of RAM, running under Red Hat Enterprise Linux AS release 4 (Nahant Update 4). Network bandwidth was 1Gbit.

We ran the Master node on the first server, and the Replica and the clients (both consumers and producers) on the second server.

In the non-HA case the Qpid Broker was run on the first server and the clients on the second server.

The table below contains the test results we measured on this environment for the different Broker configurations.

Each result is represented by a throughput value in KB/second, and the difference in % between the HA configuration and the non-HA case for the same number of clients.

Performance Comparison

Test/Broker | No HA | SYNC,SYNC,ALL | WRITE_NO_SYNC,WRITE_NO_SYNC,ALL | WRITE_NO_SYNC,WRITE_NO_SYNC,ALL - coalescing-sync | WRITE_NO_SYNC,NO_SYNC,ALL - coalescing-sync | NO_SYNC,NO_SYNC,ALL - coalescing-sync
1 (1/1)     | 0.0%  | -61.4%  | 117.0%  | -16.02% | -9.58%  | -25.47%
2 (2/2)     | 0.0%  | -75.43% | 67.87%  | -66.6%  | -69.02% | -30.43%
3 (4/4)     | 0.0%  | -84.89% | 24.19%  | -71.02% | -69.37% | -43.67%
4 (8/8)     | 0.0%  | -91.17% | -22.97% | -82.32% | -83.42% | -55.5%
5 (16/16)   | 0.0%  | -91.16% | -21.42% | -86.6%  | -86.37% | -46.99%
6 (32/32)   | 0.0%  | -94.83% | -51.51% | -92.15% | -92.02% | -57.59%
7 (64/64)   | 0.0%  | -94.2%  | -41.84% | -89.55% | -89.55% | -50.54%

The figure below depicts the graphs of the performance test results.

Figure 1.8. Test results

Using durability SYNC,SYNC,ALL (without coalescing-sync), performance drops significantly (by 62-95%) in comparison with the non-HA broker.

Using durability WRITE_NO_SYNC,WRITE_NO_SYNC,ALL (without coalescing-sync), performance drops by only about half, but with a loss of durability guarantee, so this is not recommended.

To achieve better performance with HA, the Qpid Broker provides the special mode called coalescing-sync. With this mode enabled, the Qpid broker batches concurrent transaction commits and syncs the transaction data to the Master's disk in one go. As a result, HA performance drops by only 25-60% for durability NO_SYNC,NO_SYNC,ALL and by 10-90% for WRITE_NO_SYNC,WRITE_NO_SYNC,ALL.



[1] The automatic failover feature is available only for AMQP connections from the Java client. Management connections (JMX) do not currently offer this feature.

Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html
------------------------------------------------------------------------------
    svn:eol-style = native

Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html
------------------------------------------------------------------------------
    svn:keywords = Rev Date

Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/ch01s06.html
------------------------------------------------------------------------------
    svn:mime-type = text/html

Modified: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/css/style.css
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/css/style.css?rev=1368244&r1=1368243&r2=1368244&view=diff
==============================================================================
--- qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/css/style.css (original)
+++ qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/css/style.css Wed Aug  1 20:54:46 2012
@@ -35,15 +35,17 @@ th {
 }
 
 body {
-	width:950px;
-	margin-left:100px;
-	margin-top:40px;
-	
+	margin:0;
 	background:#FFFFFF;
 	font-family:"Verdana", sans-serif;
 	font-size:10pt;
 }
 
+.container {
+	width:950px;
+	margin:0 auto;
+}
+
 body a {
 	color:#000000;
 }

Added: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113098.png
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113098.png?rev=1368244&view=auto
==============================================================================
Binary file - no diff available.

Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113098.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113099.png
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113099.png?rev=1368244&view=auto
==============================================================================
Binary file - no diff available.

Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113099.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113100.png
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113100.png?rev=1368244&view=auto
==============================================================================
Binary file - no diff available.

Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113100.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113101.png
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113101.png?rev=1368244&view=auto
==============================================================================
Binary file - no diff available.
Propchange: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113101.png
------------------------------------------------------------------------------
    svn:mime-type = image/png

Added: qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113102.png
URL: http://svn.apache.org/viewvc/qpid/site/docs/books/trunk/AMQP-Messaging-Broker-Java-Book/html/images/3113102.png?rev=1368244&view=auto
==============================================================================
Binary file - no diff available.