From: orudyy@apache.org
To: commits@qpid.apache.org
Date: Sun, 04 Mar 2018 21:23:19 -0000
Subject: [27/32] qpid-site git commit: QPID-8112: Update site content for Qpid Broker-J 7.0.2

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-Getting-Started.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-Getting-Started.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-Getting-Started.html
new file mode 100644

Chapter 3. Getting Started

3.1. Introduction

This section describes how to start and stop the Broker, and outlines the various command line options.

For additional details about the broker configuration store and related command line arguments, see Chapter 5, Initial Configuration. The broker is fully configurable via its Web Management Console; for details, see Section 6.2, “Web Management Console”.

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Backup.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Backup.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Backup.html
new file mode 100644

10.10. Backups

It is recommended to use the hot backup script to periodically back up every node in the group. See Section 11.2.2, “BDB-HA”.

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Behaviour.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Behaviour.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Behaviour.html
new file mode 100644

10.4. Behaviour of the Group

This section first describes the behaviour of the group in its default configuration, then goes on to describe the various controls that are available to override it. These controls affect the durability of transactions and the data consistency between the master and replicas, and thus allow trade-offs to be made between performance and reliability.

10.4.1. Default Behaviour

Let's first look at the behaviour of a group in the default configuration.

In the default configuration, for any messaging work to be done, there must be at least quorum nodes present. For example, in a three node group, there must be at least two nodes available.

When a messaging client sends a transaction, it is assured that, before control returns to its application after the commit call, the following is true:

  • At the master, the transaction is written to disk and OS level caches are flushed, meaning the data is on the storage device.

  • At least quorum minus one replicas acknowledge receipt of the transaction. The replicas will write the data to the storage device sometime later.

If there were to be a master failure immediately after the transaction was committed, the transaction would be held by at least quorum minus one replicas. For example, in a group of three, we would be assured that at least one replica held the transaction.
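The quorum arithmetic used above is a simple majority of the group. As a minimal illustrative sketch (not broker code):

```java
// A minimal sketch of the quorum arithmetic above (not broker code):
// quorum is a simple majority of the group.
public class QuorumCalc {

    // floor(n/2) + 1, e.g. 2 for a three-node group.
    static int quorum(int groupSize) {
        return groupSize / 2 + 1;
    }

    // Minimum number of replicas guaranteed to hold a committed
    // transaction after a master failure: quorum minus one (the master
    // itself provided the other acknowledgement).
    static int replicasHoldingTxn(int groupSize) {
        return quorum(groupSize) - 1;
    }

    public static void main(String[] args) {
        System.out.println("3-node group: quorum=" + quorum(3)
                + ", replicas guaranteed to hold a committed txn="
                + replicasHoldingTxn(3)); // quorum=2, replicas=1
    }
}
```

Note that a two-node group also has a quorum of two, which is why the two-node case needs the special handling described later in this chapter.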

In the event of a master failure, if quorum nodes remain, those nodes hold an election. The nodes will elect as master the node with the most recent transaction. If two or more nodes have the most recent transaction, the group makes an arbitrary choice. If quorum nodes do not remain, the nodes cannot elect a new master and will wait until nodes rejoin. You will see later that manual controls are available to allow service to be restored from fewer than quorum nodes and to influence which node gets elected in the event of a tie.

Whenever a group has fewer than quorum nodes present, the virtualhost will be unavailable and messaging connections will be refused. If quorum disappears at the very moment a messaging client sends a transaction, that transaction will fail.

You will have noticed the difference in the synchronization policies applied to the master and the replicas. The replicas send the acknowledgement back before the data is written to disk, whereas the master synchronously writes the transaction to storage. This is an example of a trade-off between durability and performance. We will see more about how to control this trade-off later.

10.4.2. Synchronization Policy

The synchronization policy dictates what a node must do when it receives a transaction, before it acknowledges that transaction to the rest of the group.

The following options are available:

  • SYNC. The node must write the transaction to disk and flush any OS level buffers before sending the acknowledgement. SYNC offers the highest durability but the lowest performance.

  • WRITE_NO_SYNC. The node must write the transaction to disk before sending the acknowledgement. OS level buffers will be flushed at some point later. This typically provides an assurance against failure of the application but not of the operating system or hardware.

  • NO_SYNC. The node immediately sends the acknowledgement. The transaction will be written, and OS level buffers flushed, at some point later. NO_SYNC offers the highest performance but the lowest durability. This synchronization policy is sometimes known as commit to the network.

It is possible to assign one policy to the master and a different policy to the replicas. These are configured as attributes on the virtualhost. By default the master uses SYNC and the replicas use NO_SYNC.
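As a hypothetical sketch, the two policies might be changed together by PUTting a JSON body to the virtualhost via the REST API. The attribute names below are assumptions, not confirmed by this chapter; check your broker's management model before use.

```java
// Hypothetical sketch: building the JSON body that might be PUT to the
// virtualhost via the REST API to change the two policies. The attribute
// names below are assumptions; verify them against the broker's model.
public class SyncPolicyPayload {

    static String payload(String masterPolicy, String replicaPolicy) {
        return "{\"localTransactionSynchronizationPolicy\":\"" + masterPolicy + "\","
             + "\"remoteTransactionSynchronizationPolicy\":\"" + replicaPolicy + "\"}";
    }

    public static void main(String[] args) {
        // The defaults described above: master SYNC, replicas NO_SYNC.
        System.out.println(payload("SYNC", "NO_SYNC"));
    }
}
```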

10.4.3. Node Priority

Node priority can be used to influence the behaviour of the election algorithm. It is useful in the case where you want to favour some nodes over others, for instance, nodes located in a particular data centre over those in a remote site.

The following options are available:

  • Highest. Nodes with this priority will be the most favoured. In the event of two or more nodes having the most recent transaction, the node with this priority will be elected master. If two or more nodes have this priority, the algorithm will make an arbitrary choice.

  • High. Nodes with this priority will be favoured, but not as much as those with Highest.

  • Normal. This is the default election priority.

  • Never. The node will never be elected, even if it has the most recent transaction. The node will still keep up to date with the replication stream and will still vote in elections, but it can never be elected.


Node priority is configured as an attribute on the virtualhost node. It can be changed at runtime and is effective immediately.
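The election rules described above can be sketched as follows. This is purely illustrative, not the real BDB JE election code: Never nodes are excluded outright; among the rest, the most recent transaction wins and priority breaks ties.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of the election rules above (not the real BDB JE
// election code): Never nodes are excluded outright; among the rest, the
// most recent transaction wins and priority breaks ties.
public class ElectionSketch {

    enum Priority { NEVER, NORMAL, HIGH, HIGHEST }

    record Node(String name, long lastTxnId, Priority priority) {}

    static Optional<Node> electMaster(List<Node> nodes) {
        return nodes.stream()
                .filter(n -> n.priority() != Priority.NEVER)    // unelectable
                .max(Comparator.comparingLong(Node::lastTxnId)  // newest txn wins
                        .thenComparing(Comparator.comparing(Node::priority)));
    }

    public static void main(String[] args) {
        List<Node> group = List.of(
                new Node("weather1", 100, Priority.NORMAL),
                new Node("weather2", 100, Priority.HIGHEST),
                new Node("weather3", 99, Priority.HIGH));
        // weather1 and weather2 tie on the transaction; priority decides.
        System.out.println(electMaster(group).get().name()); // prints weather2
    }
}
```

Note how excluding Never nodes before comparing transactions reproduces the transaction-loss scenario in the Important note: a Never node holding the newest transaction is simply never considered.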

Important

Use of the Never priority can lead to transaction loss. For example, consider a group of three where Replica-2 is marked as Never. If a transaction were to arrive and be acknowledged only by the Master and Replica-2, the transaction would succeed. Suppose Replica-1 is running behind for some reason (perhaps a full GC). If a Master failure were to occur at that moment, the replicas would elect Replica-1 even though Replica-2 had the most recent transaction.

Transaction loss is reported by message HA-1014.

10.4.4. Required Minimum Number Of Nodes

This controls the required minimum number of nodes to complete a transaction and to elect a new master. By default, the required number of nodes is set to Default (which signifies quorum).

It is possible to reduce the required minimum number of nodes. The rationale for doing this is normally to temporarily restore service from fewer than quorum nodes following an extraordinary failure.

For example, consider a group of three. If one node were to fail, as quorum still remained, the system would continue to work without any intervention. If the failing node were the master, a new master would be elected.

What if a further node were to fail? Quorum no longer remains, and the remaining node would just wait. It cannot elect itself master. What if we wanted to restore service from just this one node?

In this case, the required minimum number of nodes can be reduced to 1 on the remaining node, allowing the node to elect itself and service to be restored from the singleton. The required minimum number of nodes is configured as an attribute on the virtualhost node. It can be changed at runtime and is effective immediately.
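The recovery decision above can be sketched as follows. The attribute name quorumOverride is an assumption about the virtualhost node's model; verify it against your broker before use.

```java
// Hypothetical sketch of the recovery decision above. "quorumOverride"
// as the attribute name is an assumption; verify it against the
// broker's virtualhost node model before use.
public class QuorumOverride {

    static int quorum(int groupSize) {
        return groupSize / 2 + 1;
    }

    // Returns the JSON body to apply to the virtualhost node, or null if
    // quorum is intact and no override should be left in place.
    static String overridePayload(int groupSize, int nodesRemaining) {
        if (nodesRemaining >= quorum(groupSize)) {
            return null; // quorum intact: do not override
        }
        return "{\"quorumOverride\":" + nodesRemaining + "}";
    }

    public static void main(String[] args) {
        // A group of three reduced to a single surviving node.
        System.out.println(overridePayload(3, 1)); // prints {"quorumOverride":1}
    }
}
```

The null return mirrors the warning that follows: the override exists only for extraordinary recovery and must be reverted as failed nodes return.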

Important

This attribute must be used cautiously. Careless use will lead to lost transactions and can lead to a split-brain in the event of a network partition. If used to temporarily restore service from fewer than quorum nodes, it is imperative to revert it to the Default value as the failed nodes are restored.

Transaction loss is reported by message HA-1014.

10.4.5. Allow to Operate Solo

This attribute only applies to groups containing exactly two nodes.

In a group of two, if a node were to fail then, in the default configuration, work will cease as quorum no longer exists. A single node cannot elect itself master.

The allow to operate solo flag allows a node in a two node group to elect itself master and to operate solo. It is configured as an attribute on the virtualhost node and can be changed at runtime, taking effect immediately.

For example, consider a group of two where the master fails. Service will be interrupted as the remaining node cannot elect itself master. To allow it to become master, apply the allow to operate solo flag to it. It will elect itself master and work can continue, albeit from one node.

Important

It is imperative not to allow the allow to operate solo flag to be set on both nodes at once. Doing so means that, in the event of a network partition, a split-brain will occur.

Transaction loss is reported by message HA-1014.

10.4.6. Maximum message size

Internally, BDB JE HA restricts the maximum size of replication stream records passed from the master to the replica(s). This helps prevent denial-of-service attacks. If the application's expected maximum message size is greater than 5MB, the BDB JE setting je.rep.maxMessageSize and the Qpid context variable qpid.max_message_size need to be adjusted accordingly in order to avoid running into the BDB JE HA limit.
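A sketch of raising both limits together, as the text requires. The names je.rep.maxMessageSize and qpid.max_message_size come from the text above; the JSON "context" shape is an assumption about the management model.

```java
// Sketch: raising both limits together, as the text requires. The names
// je.rep.maxMessageSize and qpid.max_message_size come from the text;
// the JSON "context" shape is an assumption about the management model.
public class MaxMessageSize {

    static String contextPayload(long maxBytes) {
        return "{\"context\":{"
             + "\"je.rep.maxMessageSize\":\"" + maxBytes + "\","
             + "\"qpid.max_message_size\":\"" + maxBytes + "\"}}";
    }

    public static void main(String[] args) {
        // Allow messages up to 10 MiB, above the 5MB figure given above.
        System.out.println(contextPayload(10L * 1024 * 1024));
    }
}
```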

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-ClientFailover.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-ClientFailover.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-ClientFailover.html
new file mode 100644

10.6. Client failover

As mentioned above, the clients need to be able to find the location of the active virtualhost within the group.

Clients can do this using a static technique, for example, utilising the failover feature of the Apache Qpid JMS and Apache Qpid JMS AMQP 0-x clients, where the client has a list of all the nodes and tries each node in sequence until it discovers the node with the active virtualhost.
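The static technique can be sketched for the Apache Qpid JMS AMQP 0-x client as a connection URL whose brokerlist names every node, tried in turn. The hostnames and ports are this chapter's example values; the credentials and failover option here are illustrative assumptions.

```java
// Sketch of the static technique for the Apache Qpid JMS AMQP 0-x
// client: a connection URL whose brokerlist names every node so the
// client can try them in turn. Hostnames/ports are the chapter's example
// values; credentials and failover options here are illustrative.
public class FailoverUrl {

    static String connectionUrl(String vhost, String... brokers) {
        return "amqp://guest:guest@clientid/" + vhost
             + "?brokerlist='" + String.join(";", brokers) + "'"
             + "&failover='roundrobin'";
    }

    public static void main(String[] args) {
        System.out.println(connectionUrl("weather",
                "tcp://thor:5672", "tcp://chaac:5672", "tcp://indra:5672"));
    }
}
```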

Another possibility is a dynamic technique utilising a proxy or Virtual IP (VIP). These require other software and/or hardware and are outside the scope of this document.

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-CreatingGroup.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-CreatingGroup.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-CreatingGroup.html
new file mode 100644

10.3. Creating a group

This section describes how to create a group. At a high level, creating a group involves first creating the first node standalone, then creating subsequent nodes referencing the first node so the nodes can introduce themselves; gradually the group is built up.

A group is created through either Web Management or the REST API. These instructions presume you are using Web Management. The example builds the group illustrated in Figure 10.1, “3-node group deployed across three Brokers.”

  1. Install a Broker on each machine that will be used to host the group. As messaging clients will need to be able to connect and authenticate to all Brokers, it usually makes sense to choose a common authentication mechanism, e.g. Simple LDAP Authentication, External with SSL client authentication, or Kerberos.

  2. Select one Broker instance to host the first node instance. This choice is an arbitrary one. The node is special only whilst creating the group. Once creation is complete, all nodes will be considered equal.

  3. Click the Add button on the Virtualhost Panel on the Broker tab.


    1. Give the Virtualhost node a unique name e.g. weather1. The name must be unique within the group and unique to that Broker. It is best if the node names are chosen from a different nomenclature than the machine names themselves.

    2. Choose BDB_HA and select New group.

    3. Give the group a name e.g. weather. The group name must be unique and will be the name also given to the virtualhost, so this is the name the messaging clients will use in their connection url.

    4. Give the address of this node. This is an address on this node's host that will be used for replication purposes. The hostname must be resolvable by all the other nodes in the group. This is separate from the address used by messaging clients to connect to the Broker. It is usually best to choose a symbolic name, rather than an IP address.

    5. Now add the node addresses of all the other nodes that will form the group. In our example we are building a three node group so we give the node addresses of chaac:5000 and indra:5000.

    6. Click Add to create the node. The virtualhost node will be created with the virtualhost. As there is only one node at this stage, the role will be master.


    Figure 10.2. Creating 1st node in a group




  4. Now move to the second Broker to be added to the group. Click the Add button on the Virtualhost Panel on the Broker tab of the second Broker.


    1. Give the Virtualhost node a unique name e.g. weather2.

    2. Choose BDB_HA and choose Existing group.

    3. Give the details of the existing node. Following our example, specify weather, weather1 and thor:5000.

    4. Give the address of this node.

    5. Click Add to create the node. The node will use the existing node's details to contact it and introduce itself into the group. At this stage, the group will have two nodes, with the second node in the replica role.

    6. Repeat these steps until you have added all the nodes to the group.


    Figure 10.3. Adding subsequent nodes to the group




The group is now formed and is ready for use. Looking at the virtualhost node of any of the nodes shows a complete view of the whole group.

Figure 10.4. View of group from one node



http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-DiskSpace.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-DiskSpace.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-DiskSpace.html
new file mode 100644

10.7. Disk space requirements

In the case where nodes in a group are down, the master must retain the data they are missing, to allow them to return to the replica role quickly.

By default, the master will retain up to one hour of missed transactions. In a busy production system, the disk space occupied could be considerable.

This setting is controlled by the virtualhost context variable je.rep.repStreamTimeout.
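A back-of-envelope estimate of the disk-space concern above: the master may retain up to the repStreamTimeout window (default one hour) of missed transactions for absent replicas. The figures below are illustrative assumptions, not measured values.

```java
// Back-of-envelope sketch of the disk-space concern above: the master
// may retain up to the repStreamTimeout window (default one hour) of
// missed transactions for absent replicas. Figures are illustrative.
public class RetentionEstimate {

    static long retainedBytes(long msgsPerSecond, long avgMsgBytes, long windowSeconds) {
        return msgsPerSecond * avgMsgBytes * windowSeconds;
    }

    public static void main(String[] args) {
        // 500 msg/s of 4 KiB messages over the default one-hour window.
        long bytes = retainedBytes(500, 4096, 3600);
        System.out.println(bytes / (1024 * 1024) + " MiB retained"); // ~7031 MiB
    }
}
```

Even modest message rates can therefore retain several gigabytes on the master while a replica is down.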

http://git-wip-us.apache.org/repos/asf/qpid-site/blob/91891e66/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Network-Requirements.html
----------------------------------------------------------------------
diff --git a/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Network-Requirements.html b/content/releases/qpid-broker-j-7.0.2/book/Java-Broker-High-Availability-Network-Requirements.html
new file mode 100644

10.8. Network Requirements

The HA Cluster performance depends on the network bandwidth, its use by existing traffic, and quality of service.

In order to achieve the best performance, it is recommended to use a separate network infrastructure for the Qpid HA nodes. This might include installing dedicated network hardware on Broker hosts, assigning a higher priority to replication ports, or installing the group in a separate network not impacted by any other traffic.
