Return-Path: X-Original-To: apmail-activemq-commits-archive@www.apache.org Delivered-To: apmail-activemq-commits-archive@www.apache.org Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by minotaur.apache.org (Postfix) with SMTP id CD7A817557 for ; Tue, 11 Nov 2014 11:00:36 +0000 (UTC) Received: (qmail 68082 invoked by uid 500); 11 Nov 2014 11:00:34 -0000 Delivered-To: apmail-activemq-commits-archive@activemq.apache.org Received: (qmail 67981 invoked by uid 500); 11 Nov 2014 11:00:34 -0000 Mailing-List: contact commits-help@activemq.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@activemq.apache.org Delivered-To: mailing list commits@activemq.apache.org Received: (qmail 66830 invoked by uid 99); 11 Nov 2014 11:00:33 -0000 Received: from tyr.zones.apache.org (HELO tyr.zones.apache.org) (140.211.11.114) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 11 Nov 2014 11:00:33 +0000 Received: by tyr.zones.apache.org (Postfix, from userid 65534) id B3A4F9ABD50; Tue, 11 Nov 2014 11:00:33 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit From: andytaylor@apache.org To: commits@activemq.apache.org Date: Tue, 11 Nov 2014 11:01:18 -0000 Message-Id: <8a6eee20c89f484c9a9488937d29c8db@git.apache.org> In-Reply-To: <8993358cfa3a48d182aa897a718a8a96@git.apache.org> References: <8993358cfa3a48d182aa897a718a8a96@git.apache.org> X-Mailer: ASF-Git Admin Mailer Subject: [48/51] [partial] activemq-6 git commit: ACTIVEMQ6-2 Update to HQ master http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/clustered/hornetq-users.xml ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/clustered/hornetq-users.xml b/distribution/hornetq/src/main/resources/config/trunk/clustered/hornetq-users.xml deleted file mode 100644 index 934306c..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/clustered/hornetq-users.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/clustered/jndi.properties ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/clustered/jndi.properties b/distribution/hornetq/src/main/resources/config/trunk/clustered/jndi.properties deleted file mode 100644 index e2a9832..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/clustered/jndi.properties +++ /dev/null @@ -1,2 +0,0 @@ -java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory -java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/clustered/logging.properties ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/clustered/logging.properties b/distribution/hornetq/src/main/resources/config/trunk/clustered/logging.properties deleted file mode 100644 index dd49ead..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/clustered/logging.properties +++ /dev/null @@ -1,34 +0,0 @@ -############################################################ -# Default Logging Configuration File -# -# You 
can use a different file by specifying a filename -# with the java.util.logging.config.file system property. -# For example java -Djava.util.logging.config.file=myfile -############################################################ - -############################################################ -# Global properties -############################################################ - -# "handlers" specifies a comma separated list of log Handler -# classes. These handlers will be installed during VM startup. -# Note that these classes must be on the system classpath. -# By default we only configure a ConsoleHandler, which will only -# show messages at the INFO and above levels. -handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler -java.util.logging.ConsoleHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter -java.util.logging.FileHandler.level=INFO -java.util.logging.FileHandler.pattern=logs/hornetq.log -java.util.logging.FileHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter -# Default global logging level. -# This specifies which kinds of events are logged across -# all loggers. For any given facility this global level -# can be overriden by a facility specific level -# Note that the ConsoleHandler also has a separate level -# setting to limit messages printed to the console. -.level= INFO - -############################################################ -# Handler specific properties. -# Describes specific configuration info for Handlers. -############################################################ http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-beans.xml ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-beans.xml b/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-beans.xml deleted file mode 100644 index 195019f..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-beans.xml +++ /dev/null @@ -1,60 +0,0 @@ - - - - - - - - - - - - 1099 - localhost - 1098 - localhost - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-configuration.xml ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-configuration.xml b/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-configuration.xml deleted file mode 100644 index d6788f3..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-configuration.xml +++ /dev/null @@ -1,59 +0,0 @@ - - - 10 - - - - org.hornetq.core.remoting.impl.netty.NettyConnectorFactory - - - - - - org.hornetq.core.remoting.impl.netty.NettyConnectorFactory - - - - - - - - - org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory - - - - - - org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory - - - - - - - - - - - - - - - - - - - - jms.queue.DLQ - jms.queue.ExpiryQueue - 0 - 10485760 - 10 - BLOCK - - - - http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-jms.xml 
---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-jms.xml b/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-jms.xml deleted file mode 100644 index 3a3dbeb..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-jms.xml +++ /dev/null @@ -1,40 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-users.xml ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-users.xml b/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-users.xml deleted file mode 100644 index 934306c..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/hornetq-users.xml +++ /dev/null @@ -1,7 +0,0 @@ - - - - - - \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/non-clustered/jndi.properties ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/jndi.properties b/distribution/hornetq/src/main/resources/config/trunk/non-clustered/jndi.properties deleted file mode 100644 index e2a9832..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/jndi.properties +++ /dev/null @@ -1,2 +0,0 @@ -java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory -java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/config/trunk/non-clustered/logging.properties ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/logging.properties b/distribution/hornetq/src/main/resources/config/trunk/non-clustered/logging.properties deleted file mode 100644 index 00f9c3d..0000000 --- a/distribution/hornetq/src/main/resources/config/trunk/non-clustered/logging.properties +++ /dev/null @@ -1,38 +0,0 @@ -############################################################ -# Default Logging Configuration File -# -# You can use a different file by specifying a filename -# with the java.util.logging.config.file system property. -# For example java -Djava.util.logging.config.file=myfile -############################################################ - -############################################################ -# Global properties -############################################################ - -# "handlers" specifies a comma separated list of log Handler -# classes. These handlers will be installed during VM startup. -# Note that these classes must be on the system classpath. -# By default we only configure a ConsoleHandler, which will only -# show messages at the INFO and above levels. 
-handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler -java.util.logging.ConsoleHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter -java.util.logging.FileHandler.level=INFO -java.util.logging.FileHandler.formatter=org.hornetq.integration.logging.HornetQLoggerFormatter -# cycle through 10 files of 20MiB max which append logs -java.util.logging.FileHandler.count=10 -java.util.logging.FileHandler.limit=20971520 -java.util.logging.FileHandler.append=true -java.util.logging.FileHandler.pattern=logs/hornetq.%g.log -# Default global logging level. -# This specifies which kinds of events are logged across -# all loggers. For any given facility this global level -# can be overriden by a facility specific level -# Note that the ConsoleHandler also has a separate level -# setting to limit messages printed to the console. -.level= INFO - -############################################################ -# Handler specific properties. -# Describes specific configuration info for Handlers. -############################################################ http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/hornetq/src/main/resources/examples/common/config/ant.properties ---------------------------------------------------------------------- diff --git a/distribution/hornetq/src/main/resources/examples/common/config/ant.properties b/distribution/hornetq/src/main/resources/examples/common/config/ant.properties deleted file mode 100644 index 88ef1a7..0000000 --- a/distribution/hornetq/src/main/resources/examples/common/config/ant.properties +++ /dev/null @@ -1,4 +0,0 @@ -hornetq.example.logserveroutput=true -hornetq.jars.dir=${imported.basedir}/../../lib -jars.dir=${imported.basedir}/../../lib -aio.library.path=${imported.basedir}/../../bin \ No newline at end of file http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/jboss-mc/pom.xml ---------------------------------------------------------------------- diff --git a/distribution/jboss-mc/pom.xml b/distribution/jboss-mc/pom.xml deleted file mode 100644 index 1a3fc0a..0000000 --- a/distribution/jboss-mc/pom.xml +++ /dev/null @@ -1,123 +0,0 @@ - - 4.0.0 - - - - org.hornetq - hornetq-distribution - 2.5.0-SNAPSHOT - - - jboss-mc - jar - JBoss Microcontainer jar - - - - - org.jboss.microcontainer - jboss-kernel - - - org.jboss.microcontainer - jboss-dependency - - - org.jboss - jboss-reflect - - - org.jboss - jboss-common-core - - - org.jboss - jboss-mdr - - - org.jboss - jbossxb - - - sun-jaxb - jaxb-api - - - org.jboss.logging - jboss-logging - - - org.jboss.logmanager - jboss-logmanager - - - - - - - src/main/resources - true - - - - - org.apache.maven.plugins - maven-shade-plugin - - - package - - shade - - - - - org.jboss.netty:netty - org.jboss.logging:jboss-logging-spi - - - - - - org.jboss.microcontainer:jboss-kernel - - - org.jboss.microcontainer:jboss-dependency - - - org.jboss:jboss-reflect - - - org.jboss:jboss-common-core - - - org.jboss:jboss-mdr - - - org.jboss:jbossxb - - - sun-jaxb:jaxb-api - - - org.jboss.logging:jboss-logging - - - org.jboss.logmanager:jboss-logmanager - - - - - - - - - - - http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/distribution/pom.xml ---------------------------------------------------------------------- diff --git a/distribution/pom.xml b/distribution/pom.xml index 498d761..538e685 100644 --- a/distribution/pom.xml +++ b/distribution/pom.xml @@ -34,7 +34,6 @@ jnp-client - jboss-mc hornetq 
http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/quickstart-guide/en/running.xml ---------------------------------------------------------------------- diff --git a/docs/quickstart-guide/en/running.xml b/docs/quickstart-guide/en/running.xml index 39f8797..de0ba4f 100644 --- a/docs/quickstart-guide/en/running.xml +++ b/docs/quickstart-guide/en/running.xml @@ -26,21 +26,21 @@
Standalone HornetQ To run a stand-alone server, open up a shell or command prompt and navigate into the - bin directory. Then execute ./run.sh (or run.bat on Windows) and you should see the following output + bin directory. Then execute ./hornetq run (or ./hornetq.cmd run on Windows) and you should see the following output - bin$ ./run.sh - - 15:05:54,108 INFO @main [HornetQBootstrapServer] Starting HornetQ server + bin$ ./hornetq run + + 11:05:06,589 INFO [org.hornetq.integration.bootstrap] HQ101000: Starting HornetQ Server ... - 15:06:02,566 INFO @main [HornetQServerImpl] HornetQ Server version - 2.0.0.CR3 (yellowjacket, 111) started + 11:05:10,848 INFO [org.hornetq.core.server] HQ221001: HornetQ Server version 2.5.0.SNAPSHOT (Wild Hornet, 125) [e32ae252-52ee-11e4-a716-7785dc3013a3] HornetQ is now running. Both the run and the stop scripts use the config under config/stand-alone/non-clustered by default. The configuration can be changed - by running ./run.sh ../config/stand-alone/clustered or another config of - your choosing. This is the same for the stop script and the windows bat files. + >config/non-clustered by default. The configuration can be changed + by running ./hornetq run xml:../config/non-clustered/bootstrap.xml or another config of + your choosing. + The server can be stopped by running ./hornetq stop
HornetQ In Wildfly http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/HornetQ_User_Manual.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/HornetQ_User_Manual.xml b/docs/user-manual/en/HornetQ_User_Manual.xml index 3ddf954..a726cfc 100644 --- a/docs/user-manual/en/HornetQ_User_Manual.xml +++ b/docs/user-manual/en/HornetQ_User_Manual.xml @@ -36,6 +36,7 @@ + http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/clusters.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/clusters.xml b/docs/user-manual/en/clusters.xml index 25e79b2..2dd1896 100644 --- a/docs/user-manual/en/clusters.xml +++ b/docs/user-manual/en/clusters.xml @@ -594,11 +594,13 @@ ClientSession session = factory.createSession(); shows all the available configuration options - address. Each cluster connection only applies to - messages sent to an address that starts with this value. Note: this does - not use wild-card matching. - In this case, this cluster connection will load balance messages sent to - address that start with jms. This cluster connection, + address Each cluster connection only applies to addresses that match the + specified address field. An address is matched on the cluster connection when it begins with the + string specified in this field. The address field on a cluster connection also supports comma + separated lists and an exclude syntax '!'. To prevent an address from being matched on this + cluster connection, prepend a cluster connection address string with '!'. + In the case shown above the cluster connection will load balance messages sent to + addresses that start with jms. This cluster connection, will, in effect apply to all JMS queues and topics since they map to core queues that start with the substring "jms". The address can be any value and you can have many cluster connections @@ -611,6 +613,24 @@ ClientSession session = factory.createSession(); values of address, e.g. "europe" and "europe.news" since this could result in the same messages being distributed between more than one cluster connection, possibly resulting in duplicate deliveries. + + Examples: + + 'jms.eu' matches all addresses starting with 'jms.eu' + '!jms.eu' matches all address except for those starting with + 'jms.eu' + 'jms.eu.uk,jms.eu.de' matches all addresses starting with either + 'jms.eu.uk' or 'jms.eu.de' + 'jms.eu,!jms.eu.uk' matches all addresses starting with 'jms.eu' + but not those starting with 'jms.eu.uk' + + Notes: + + Address exclusion will always takes precedence over address inclusion. + Address matching on cluster connections does not support wild-card matching. + + + This parameter is mandatory. http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/configuring-transports.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/configuring-transports.xml b/docs/user-manual/en/configuring-transports.xml index 066e783..d2f4143 100644 --- a/docs/user-manual/en/configuring-transports.xml +++ b/docs/user-manual/en/configuring-transports.xml @@ -177,10 +177,10 @@ etc Java IO, or NIO (non-blocking), also to use straightforward TCP sockets, SSL, or to tunnel over HTTP or HTTPS.. We believe this caters for the vast majority of transport requirements. -
+
Single Port Support As of version 2.4 HornetQ now supports using a single port for all protocols, HornetQ will automatically - detect which protocol is being used CORE, AMQP or STOMP and use the appropriate HornetQ handler. It will also detect + detect which protocol is being used CORE, AMQP, STOMP or OPENWIRE and use the appropriate HornetQ handler. It will also detect whether protocols such as HTTP or Web Sockets are being used and also use the appropriate decoders It is possible to limit which protocols are supported by using the protocols parameter on the Acceptor like so: http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/core-bridges.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/core-bridges.xml b/docs/user-manual/en/core-bridges.xml index 9004e7f..822510a 100644 --- a/docs/user-manual/en/core-bridges.xml +++ b/docs/user-manual/en/core-bridges.xml @@ -197,11 +197,13 @@ connection used to forward messages to the target node. This attribute is described in section - When using the bridge to forward messages from a queue which has a - max-size-bytes set it's important that confirmation-window-size is less than - or equal to max-size-bytes to prevent the flow of - messages from ceasing. - + When using the bridge to forward messages to an address which uses + the BLOCK address-full-policy from a + queue which has a max-size-bytes set it's important that + confirmation-window-size is less than or equal to + max-size-bytes to prevent the flow of messages from + ceasing. + http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/diagrams/ha-colocated.odg ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/diagrams/ha-colocated.odg b/docs/user-manual/en/diagrams/ha-colocated.odg new file mode 100644 index 0000000..e464bb7 Binary files /dev/null and b/docs/user-manual/en/diagrams/ha-colocated.odg differ http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/diagrams/ha-scaledown.odg ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/diagrams/ha-scaledown.odg b/docs/user-manual/en/diagrams/ha-scaledown.odg new file mode 100644 index 0000000..933829f Binary files /dev/null and b/docs/user-manual/en/diagrams/ha-scaledown.odg differ http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/examples.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/examples.xml b/docs/user-manual/en/examples.xml index cec4021..d202c89 100644 --- a/docs/user-manual/en/examples.xml +++ b/docs/user-manual/en/examples.xml @@ -401,6 +401,11 @@ sessions are used, once and only once message delivery is not guaranteed and it is possible that some messages will be lost or delivered twice.
+
+ OpenWire + The OpenWire example shows how to configure a HornetQ + server to communicate with an ActiveMQ JMS client that uses the OpenWire protocol. +
Paging The paging example shows how HornetQ can support huge queues http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/ha.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/ha.xml b/docs/user-manual/en/ha.xml index f4b2d2b..f6772f4 100644 --- a/docs/user-manual/en/ha.xml +++ b/docs/user-manual/en/ha.xml @@ -30,7 +30,6 @@ A part of high availability is failover which we define as the ability for client connections to migrate from one server to another in event of server failure so client applications can continue to operate. -
Live - Backup Groups @@ -48,14 +47,71 @@ live server goes down, if the current live server is configured to allow automatic failback then it will detect the live server coming back up and automatically stop. -
- HA modes +
+ HA Policies HornetQ supports two different strategies for backing up a server shared - store and replication. + store and replication. This is configured via the + ha-policy configuration element. + +<ha-policy> + <replication/> +</ha-policy> + + + or + + +<ha-policy> + <shared-store/> +</ha-policy> + + + As well as these two strategies there is also a third, live-only. This means there + will be no backup strategy; it is the default if none is provided and is used to configure + scale-down, which we will cover in a later chapter. + + + + The ha-policy configuration replaces any existing HA configuration in the root of the + hornetq-configuration.xml file. All old configuration is now deprecated, although + best efforts will be made to honour it if configured this way. + + Only persistent message data will survive failover. Any non-persistent message data will not be available after failover. + The ha-policy type configures which strategy a cluster should use to back up a + server's data. Within this configuration element you also configure how a server should behave + within the cluster, either as a master (live), a slave (backup) or colocated (both live and backup). This + would look something like: + +<ha-policy> + <replication> + <master/> + </replication> +</ha-policy> + + + or + + +<ha-policy> + <shared-store> + <slave/> + </shared-store> +</ha-policy> + + + or + + +<ha-policy> + <replication> + <colocated/> + </replication> +</ha-policy> +
@@ -81,7 +137,7 @@ the one at the live's storage. If you configure your live server to perform a 'fail-back' when restarted, it will synchronize its data with the backup's. If both servers are shutdown, the administrator will have - to determine which one has the lastest data. + to determine which one has the latest data. The replicating live and backup pair must be part of a cluster. The Cluster Connection also defines how backup servers will find the remote live servers to pair @@ -104,39 +160,40 @@ specifying a node group. You can specify a group of live servers that a backup - server can connect to. This is done by configuring backup-group-name in the main + server can connect to. This is done by configuring group-name in either the master + or the slave element of the hornetq-configuration.xml. A Backup server will only connect to a live server that shares the same node group name - connecting to any live. Simply put not configuring backup-group-name - will allow a backup server to connect to any live server + connecting to any live. This will be the behaviour if group-name + is not configured allowing a backup server to connect to any live server - A backup-group-name example: suppose you have 5 live servers and 6 backup + A group-name example: suppose you have 5 live servers and 6 backup servers: live1, live2, live3: with - backup-group-name=fish + group-name=fish - live4, live5: with backup-group-name=bird + live4, live5: with group-name=bird backup1, backup2, backup3, - backup4: with backup-group-name=fish + backup4: with group-name=fish backup5, backup6: with - backup-group-name=bird + group-name=bird - After joining the cluster the backups with backup-group-name=fish will - search for live servers with backup-group-name=fish to pair with. Since there + After joining the cluster the backups with group-name=fish will + search for live servers with group-name=fish to pair with. Since there is one backup too many, the fish will remain with one spare backup. - The 2 backups with backup-group-name=bird (backup5 and + The 2 backups with group-name=bird (backup5 and backup6) will pair with live servers live4 and live5. @@ -145,14 +202,14 @@ configured. If no live server is available it will wait until the cluster topology changes and repeats the process. - This is an important distinction from a shared-store backup, as in that case if - the backup starts and does not find its live server, the server will just activate - and start to serve client requests. In the replication case, the backup just keeps - waiting for a live server to pair with. Notice that in replication the backup server + This is an important distinction from a shared-store backup, if a backup starts and does not find + a live server, the server will just activate and start to serve client requests. + In the replication case, the backup just keeps + waiting for a live server to pair with. Note that in replication the backup server does not know whether any data it might have is up to date, so it really cannot decide to activate automatically. To activate a replicating backup server using the data - it has, the administrator must change its configuration to make a live server of it, - that change backup=true to backup=false. + it has, the administrator must change its configuration to make it a live server by changing + slave to master. 
Much like in the shared-store case, when the live server stops or crashes, @@ -169,12 +226,14 @@ Configuration To configure the live and backup servers to be a replicating pair, configure - both servers' hornetq-configuration.xml to have: + the live server in ' hornetq-configuration.xml to have: -<!-- FOR BOTH LIVE AND BACKUP SERVERS' --> -<shared-store>false</shared-store> -. +<ha-policy> + <replication> + <master/> + </replication> +</ha-policy> . <cluster-connections> <cluster-connection name="my-cluster"> @@ -183,12 +242,95 @@ </cluster-connections> - The backup server must also be configured as a backup. + The backup server must be similarly configured but as a slave -<backup>true</backup> - +<ha-policy> + <replication> + <slave/> + </replication> +</ha-policy>
+
+ All Replication Configuration + + The following table lists all the ha-policy configuration elements for HA strategy + Replication for master: + + + + + + + name + Description + + + + + check-for-live-server + Whether to check the cluster for a (live) server using our own server ID when starting + up. This option is only necessary for performing 'fail-back' on replicating servers. + + + cluster-name + Name of the cluster configuration to use for replication. This setting is only necessary if you + configure multiple cluster connections. If configured then the connector configuration of the + cluster configuration with this name will be used when connecting to the cluster to discover + if a live server is already running, see check-for-live-server. If unset then + the default cluster connections configuration is used (the first one configured) + + + group-name + If set, backup servers will only pair with live servers with matching group-name + + + +
+ The following table lists all the ha-policy configuration elements for HA strategy + Replication for slave: + + + + + + + name + Description + + + + + cluster-name + Name of the cluster configuration to use for replication. This setting is only necessary if you + configure multiple cluster connections. If configured then the connector configuration of the + cluster configuration with this name will be used when connecting to the cluster to discover + if a live server is already running, see check-for-live-server. If unset then + the default cluster connections configuration is used (the first one configured) + + + group-name + If set, backup servers will only pair with live servers with matching group-name + + + max-saved-replicated-journals-size + This specifies how many times a replicated backup server can restart after moving its files on start. + Once there are this number of backup journal files the server will stop permanently after it fails + back. + + + allow-failback + Whether a server will automatically stop when another server makes a request to take over + its place. The use case is when the backup has failed over. + + + failback-delay + Delay to wait before fail-back occurs when the failed-over live server restarts + + + +
+
@@ -213,20 +355,37 @@ the shared store which can take some time depending on the amount of data in the store. If you require the highest performance during normal operation, have access to - a fast SAN, and can live with a slightly slower failover (depending on amount of - data), we recommend shared store high availability + a fast SAN and can live with a slightly slower failover (depending on the amount of + data), we recommend shared store high availability.
Configuration To configure the live and backup servers to share their store, configure - all hornetq-configuration.xml: - -<shared-store>true</shared-store> - - Additionally, each backup server must be flagged explicitly as a backup: - -<backup>true</backup> + the ha-policy element in hornetq-configuration.xml: + +<ha-policy> + <shared-store> + <master/> + </shared-store> +</ha-policy> +. +<cluster-connections> + <cluster-connection name="my-cluster"> +... + </cluster-connection> +</cluster-connections> + + + The backup server must also be configured as a backup. + + +<ha-policy> + <shared-store> + <slave/> + </shared-store> +</ha-policy> + In order for live - backup groups to operate properly with a shared store, both servers must have configured the location of journal directory to point to the same shared location (as explained in @@ -244,14 +403,57 @@ Failing Back to live Server After a live server has failed and a backup has taken over its duties, you may want to restart the live server and have clients fail back. - In case of "shared disk", simply restart the original live - server and kill the new live server. You can do this by killing the process itself or just waiting for the server to crash naturally. - In case of a replicating live server that has been replaced by a remote backup you will need to also set check-for-live-server. This option is necessary because a starting server cannot know whether there is a (remote) server running in its place, so with this option set, the server will check the cluster for another server using its node-ID and if it finds one it will try initiate a fail-back. This option only applies to live servers that are restarting, it is ignored by backup servers. - It is also possible to cause failover to occur on normal server shutdown, to enable - this set the following property to true in the hornetq-configuration.xml - configuration file like so: + In case of "shared disk", simply restart the original live server and kill the new live server. You can + do this by killing the process itself. Alternatively you can set allow-failback to + true on the slave config, which will force the backup that has become live to automatically + stop. This configuration would look like: +<ha-policy> + <shared-store> + <slave> + <allow-failback>true</allow-failback> + <failback-delay>5000</failback-delay> + </slave> + </shared-store> +</ha-policy> + The failback-delay configures how long the backup must wait after automatically + stopping before it restarts. This gives the live server time to start and obtain its lock. + In replication HA mode you need to set an extra property check-for-live-server + to true in the master configuration. If set to true, during start-up + a live server will first search the cluster for another server using its nodeID. If it finds one, it will + contact this server and try to "fail-back". Since this is a remote replication scenario, the "starting live" + will have to synchronize its data with the server running with its ID; once they are in sync, it will + request the other server (which it assumes is a backup that has assumed its duties) to shutdown for it to + take over. This is necessary because otherwise the live server has no means to know whether there was a + fail-over or not, and, if there was, whether the server that took its duties is still running or not.
To configure this option at your hornetq-configuration.xml configuration file as follows: +<ha-policy> + <replication> + <master> + <check-for-live-server>true</check-for-live-server> + </master> + </replication> +</ha-policy> + + + Be aware that if you restart a live server after failover has occurred then this value must be + set to true. If not, the live server will restart and serve the same + messages that the backup has already handled, causing duplicates. + + + It is also possible, in the case of shared store, to cause failover to occur on normal server shutdown. + To enable this, set the following property to true in the ha-policy configuration on either + the master or slave, like so: -<failover-on-shutdown>true</failover-on-shutdown> +<ha-policy> + <shared-store> + <master> + <failover-on-shutdown>true</failover-on-shutdown> + </master> + </shared-store> +</ha-policy> By default this is set to false, if by some chance you have set this to false but still want to stop the server normally and cause failover then you can do this by using the management API as explained at @@ -259,39 +461,284 @@ the original live server to take over automatically by setting the following property in the hornetq-configuration.xml configuration file as follows: -<allow-failback>true</allow-failback> - In replication HA mode you need to set an extra property check-for-live-server - to true. If set to true, during start-up a live server will first search the cluster for another server using its nodeID. If it finds one, it will contact this server and try to "fail-back". Since this is a remote replication scenario, the "starting live" will have to synchronize its data with the server running with its ID, once they are in sync, it will request the other server (which it assumes it is a back that has assumed its duties) to shutdown for it to take over. This is necessary because otherwise the live server has no means to know whether there was a fail-over or not, and if there was if the server that took its duties is still running or not. To configure this option at your hornetq-configuration.xml configuration file as follows: - -<check-for-live-server>true</check-for-live-server> +<ha-policy> + <shared-store> + <slave> + <allow-failback>true</allow-failback> + </slave> + </shared-store> +</ha-policy> + +
+ All Shared Store Configuration + The following table lists all the ha-policy configuration elements for HA strategy + shared store for master: + + + + + + + name + Description + + + + + failback-delay + If a backup server is detected as being live, via the lock file, then the live server + will announce itself as a backup and wait this amount of time (in ms) before starting as + a live server + + + failover-on-server-shutdown + If set to true then when this server is stopped normally the backup will become live, + assuming failover. If false then the backup server will remain passive. Note that if this is false and you + want failover to occur then you can use the management API as explained at + + + +
+ The following table lists all the ha-policy configuration elements for HA strategy + Shared Store for slave: + + + + + + + name + Description + + + + + failover-on-server-shutdown + This applies to a backup that has become live. If set to true then when this server + is stopped normally its backup will become live, assuming failover. If false then the backup + server will remain passive. Note that if this is false and you want failover to occur then you can use + the management API as explained at + + + allow-failback + Whether a server will automatically stop when another server makes a request to take over + its place. The use case is when the backup has failed over. + + + failback-delay + After failover, when the slave has become live, this is set on the new live server. + On start-up, if a backup server is detected as being live via the lock file, then the live server + will announce itself as a backup and wait this amount of time (in ms) before starting as + a live server; however this is unlikely since this backup has just stopped anyway. It is also used + as the delay after failback before this backup will restart (if allow-failback + is set to true). + + + +
+
+
Colocated Backup Servers It is also possible when running standalone to colocate backup servers in the same - JVM as another live server.The colocated backup will become a backup for another live - server in the cluster but not the one it shares the vm with. To configure a colocated - backup server simply add the following to the hornetq-configuration.xml file + JVM as another live server. Live Servers can be configured to request another live server in the cluster + to start a backup server in the same JVM either using shared store or replication. The new backup server + will inherit its configuration from the live server creating it, apart from its name, which will be set to + colocated_backup_n where n is the number of backups the server has created, its directories + and its Connectors and Acceptors, which are discussed later on in this chapter. A live server can also + be configured to allow requests from backups and to control how many backups a live server can start. This way + you can evenly distribute backups around the cluster. This is configured via the ha-policy + element in the hornetq-configuration.xml file like so: -<backup-servers> - <backup-server name="backup2" inherit-configuration="true" port-offset="1000"> - <configuration> - <bindings-directory>target/server1/data/messaging/bindings</bindings-directory> - <journal-directory>target/server1/data/messaging/journal</journal-directory> - <large-messages-directory>target/server1/data/messaging/largemessages</large-messages-directory> - <paging-directory>target/server1/data/messaging/paging</paging-directory> - </configuration> - </backup-server> -</backup-servers> +<ha-policy> + <replication> + <colocated> + <request-backup>true</request-backup> + <max-backups>1</max-backups> + <backup-request-retries>-1</backup-request-retries> + <backup-request-retry-interval>5000</backup-request-retry-interval> + <master/> + <slave/> + </colocated> + </replication> +</ha-policy> + - you will notice 3 attributes on the backup-server, name - which is a unique name used to identify the backup server, inherit-configuration - which if set to true means the server will inherit the configuration of its parent server - and port-offset which is what the port for any netty connectors or - acceptors will be increased by if the configuration is inherited. - it is also possible to configure the backup server in the normal way, in this example you will - notice we have changed the journal directories. + The above example is configured to use replication; in this case the master and + slave configurations must match those for normal replication as in the previous chapter. + shared-store is also supported. + +
Configuring Connectors and Acceptors If the HA Policy is colocated then connectors and acceptors will be inherited from the live server + creating it and offset depending on the setting of the backup-port-offset configuration element. + If this is set to, say, 100 (which is the default) and a connector is using port 5445 then this will be + set to 5545 for the first server created, 5645 for the second and so on. + For INVM connectors and Acceptors the id will have colocated_backup_n appended, + where n is the backup server number.
Remote Connectors It may be that some of the Connectors configured are for external servers and hence should be excluded from the offset; + for instance, a Connector used by the cluster connection to do quorum voting for a replicated backup server. + These can be omitted from being offset by adding them to the ha-policy configuration like so: +<ha-policy> + <replication> + <colocated> + <excludes> + <connector-ref>remote-connector</connector-ref> + </excludes> +......... +</ha-policy> +
+
+
Configuring Directories Directories for the Journal, Large Messages and Paging will be set according to what the HA strategy is. + If shared store is used then the requesting server will notify the target server of which directories to use. If replication + is configured then directories will be inherited from the creating server but have the new backup's name + appended.
+ + The following table lists all the ha-policy configuration elements: + + + + + + + name + Description + + + + + request-backup + If true then the server will request a backup on another node + + + backup-request-retries + How many times the live server will try to request a backup, -1 means for ever. + + + backup-request-retry-interval + How long to wait for retries between attempts to request a backup server. + + + max-backups + Whether or not this live server will accept backup requests from other live servers. + + + backup-port-offset + The offset to use for the Connectors and Acceptors when creating a new backup server. + + + +
+
+ Scaling Down + An alternative to using Live/Backup groups is to configure scaledown. when configured for scale down a server + can copy all its messages and transaction state to another live server. The advantage of this is that you dont need + full backups to provide some form of HA, however there are disadvantages with this approach the first being that it + only deals with a server being stopped and not a server crash. The caveat here is if you configure a backup to scale down. + Another disadvantage is that it is possible to lose message ordering. This happens in the following scenario, + say you have 2 live servers and messages are distributed evenly between the servers from a single producer, if one + of the servers scales down then the messages sent back to the other server will be in the queue after the ones + already there, so server 1 could have messages 1,3,5,7,9 and server 2 would have 2,4,6,8,10, if server 2 scales + down the order in server 1 would be 1,3,5,7,9,2,4,6,8,10. + + The configuration for a live server to scale down would be something like: + +<ha-policy> + <live-only> + <scale-down> + <connectors> + <connector-ref>server1-connector</connector-ref> + </connectors> + </scale-down> + </live-only> +</ha-policy> + + In this instance the server is configured to use a specific connector to scale down, if a connector is not + specified then the first INVM connector is chosen, this is to make scale down fromm a backup server easy to configure. + It is also possible to use discovery to scale down, this would look like: + +<ha-policy> + <live-only> + <scale-down> + <discovery-group>my-discovery-group</discovery-group> + </scale-down> + </live-only> +</ha-policy> + +
+ Scale Down with groups + It is also possible to configure servers to only scale down to servers that belong in the same group. This + is done by configuring the group like so: + +<ha-policy> + <live-only> + <scale-down> + ... + <group-name>my-group</group-name> + </scale-down> + </live-only> +</ha-policy> + + In this scenario only servers that belong to the group my-group will be scaled down to +
+
Scale Down and Backups It is also possible to mix scale down with HA via backup servers. If a slave is configured to scale down, + then after failover has occurred, instead of starting fully, the backup server will immediately scale down to + another live server. The most appropriate configuration for this is using the colocated approach. + It means that as you bring up live servers they will automatically be backed up, and as live servers are + shut down, their messages are made available on another live server. A typical configuration would look like: +<ha-policy> + <replication> + <colocated> + <backup-request-retries>44</backup-request-retries> + <backup-request-retry-interval>33</backup-request-retry-interval> + <max-backups>3</max-backups> + <request-backup>false</request-backup> + <backup-port-offset>33</backup-port-offset> + <master> + <group-name>purple</group-name> + <check-for-live-server>true</check-for-live-server> + <cluster-name>abcdefg</cluster-name> + </master> + <slave> + <group-name>tiddles</group-name> + <max-saved-replicated-journals-size>22</max-saved-replicated-journals-size> + <cluster-name>33rrrrr</cluster-name> + <restart-backup>false</restart-backup> + <scale-down> + <!--a grouping of servers that can be scaled down to--> + <group-name>boo!</group-name> + <!--either a discovery group--> + <discovery-group>wahey</discovery-group> + </scale-down> + </slave> + </colocated> + </replication> +</ha-policy> +
+
Scale Down and Clients When a server is stopping and preparing to scale down it will send a message to all its clients informing them + which server it is scaling down to before disconnecting them. At this point the client will reconnect; however this + will only succeed once the server has completed the scale-down. This is to ensure that any state such as queues or transactions + are there for the client when it reconnects. The normal reconnect settings apply when the client is reconnecting, so + these should be high enough to deal with the time needed to scale down, as in the sketch below. +
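The following is a minimal sketch (not part of this commit) of tuning those reconnect settings on the HornetQ JMS client. The class name, the connector host/port and the chosen retry values are illustrative assumptions and should be adapted to your own configuration.

import java.util.HashMap;
import java.util.Map;

import javax.jms.Connection;

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.jms.client.HornetQConnectionFactory;

public class ScaleDownClientSketch
{
   public static void main(String[] args) throws Exception
   {
      // Connector pointing at the server this client normally uses (host/port are assumptions).
      Map<String, Object> params = new HashMap<String, Object>();
      params.put("host", "localhost");
      params.put("port", 5445);
      TransportConfiguration connector =
         new TransportConfiguration(NettyConnectorFactory.class.getName(), params);

      HornetQConnectionFactory cf =
         HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, connector);

      // Generous reconnect settings so the client can ride out the time the server
      // needs to complete its scale-down before the reconnection can succeed.
      cf.setRetryInterval(2000);          // ms between reconnect attempts
      cf.setRetryIntervalMultiplier(1.0); // keep the interval constant
      cf.setReconnectAttempts(30);        // -1 would mean retry forever

      Connection connection = cf.createConnection();
      connection.start();
      // ... use the connection as normal; close it when done.
      connection.close();
   }
}

How high to set the interval and attempt count depends on how much data the scaling-down server has to move; larger stores need a longer overall retry window.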
+
Failover Modes HornetQ defines two types of client failover: http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/images/ha-colocated.png ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/images/ha-colocated.png b/docs/user-manual/en/images/ha-colocated.png new file mode 100644 index 0000000..e7b2d30 Binary files /dev/null and b/docs/user-manual/en/images/ha-colocated.png differ http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/images/ha-scaledown.png ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/images/ha-scaledown.png b/docs/user-manual/en/images/ha-scaledown.png new file mode 100644 index 0000000..b33f5ce Binary files /dev/null and b/docs/user-manual/en/images/ha-scaledown.png differ http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/interoperability.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/interoperability.xml b/docs/user-manual/en/interoperability.xml index e4261d7..68e2962 100644 --- a/docs/user-manual/en/interoperability.xml +++ b/docs/user-manual/en/interoperability.xml @@ -285,4 +285,23 @@ java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
+
+ OpenWire + HornetQ now supports the OpenWire + protocol so that an ActiveMQ JMS client can talk directly to a HornetQ server. To enable OpenWire support + you must configure a Netty Acceptor, like so: +<acceptor name="openwire-acceptor"> +<factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class> +<param key="protocols" value="OPENWIRE"/> +<param key="port" value="61616"/> +</acceptor> + The HornetQ server will then listen on port 61616 for incoming OpenWire commands. Please note that the "protocols" parameter is not mandatory here. + The OpenWire configuration conforms to HornetQ's "Single Port" feature. Please refer to + Configuring Single Port for details. + Please refer to the openwire example for more coding details. + Currently we support ActiveMQ clients that use the standard JMS APIs. In the future we will add support + for more advanced, ActiveMQ-specific features to HornetQ. +
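As a hedged illustration of the client side, the sketch below shows a plain ActiveMQ JMS client sending and receiving through the OpenWire acceptor configured above. The port 61616 comes from that acceptor; the queue name "exampleQueue" and the broker host are assumptions, and the ActiveMQ client jar must be on the classpath.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class OpenWireClientSketch
{
   public static void main(String[] args) throws Exception
   {
      // Standard ActiveMQ JMS client pointed at the OpenWire acceptor configured above.
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

      Connection connection = cf.createConnection();
      try
      {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

         // The queue name is an example; it must exist (or be deployable) on the HornetQ server.
         Queue queue = session.createQueue("exampleQueue");

         MessageProducer producer = session.createProducer(queue);
         TextMessage message = session.createTextMessage("Hello from an OpenWire client");
         producer.send(message);

         TextMessage received = (TextMessage) session.createConsumer(queue).receive(5000);
         System.out.println("Received: " + (received == null ? "nothing" : received.getText()));
      }
      finally
      {
         connection.close();
      }
   }
}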
http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/management.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/management.xml b/docs/user-manual/en/management.xml index be5cd32..c8ec3a3 100644 --- a/docs/user-manual/en/management.xml +++ b/docs/user-manual/en/management.xml @@ -148,15 +148,16 @@ >core.server).
- It is possible to stop the server and force failover to occur with any currently attached clients. - to do this use the forceFailover() on the It is possible to stop the server and force failover to occur with any currently attached clients. + to do this use the forceFailover() on the HornetQServerControl (with the ObjectName org.hornetq:module=Core,type=Server or the resource name core.server) - - Since this method actually stops the server you will probably receive some sort of error - depending on which management service you use to call it. - + + Since this method actually stops the server you will probably receive some sort of error + depending on which management service you use to call it. + +
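A hedged sketch of calling forceFailover() over JMX follows. The ObjectName org.hornetq:module=Core,type=Server and the forceFailover() operation come from the text above; the JMX service URL is an assumption that depends entirely on how remote JMX has been exposed for the broker JVM, and, as the note says, the call itself may fail with an error because the server stops while servicing it.

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import org.hornetq.api.core.management.HornetQServerControl;

public class ForceFailoverSketch
{
   public static void main(String[] args) throws Exception
   {
      // The JMX URL is an assumption; adjust it to however JMX remoting is exposed.
      JMXServiceURL url =
         new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:3000/jmxrmi");
      JMXConnector connector = JMXConnectorFactory.connect(url);
      try
      {
         MBeanServerConnection mbsc = connector.getMBeanServerConnection();

         // ObjectName taken from the documentation above.
         ObjectName on = new ObjectName("org.hornetq:module=Core,type=Server");
         HornetQServerControl serverControl = MBeanServerInvocationHandler
            .newProxyInstance(mbsc, on, HornetQServerControl.class, false);

         // This stops the server, so expect a possible connection error from the call.
         serverControl.forceFailover();
      }
      finally
      {
         connector.close();
      }
   }
}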
@@ -834,7 +835,7 @@ notificationConsumer.setMessageListener(new MessageListener() how to use a JMS MessageListener to receive management notifications from HornetQ server.
-
+
Notification Types and Headers Below is a list of all the different kinds of notifications as well as which headers are on the messages. Every notification has a _HQ_NotifType (value noted in parentheses) @@ -966,6 +967,14 @@ notificationConsumer.setMessageListener(new MessageListener() _HQ_Address, _HQ_Distance + + + CONSUMER_SLOW (21) + _HQ_Address, _HQ_ConsumerCount, + _HQ_RemoteAddress, _HQ_ConnectionName, + _HQ_ConsumerName, _HQ_SessionName + +
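The sketch below is a hedged example of receiving the CONSUMER_SLOW notification added above using the JMS MessageListener approach the chapter describes. It assumes a JMS topic named "notificationsTopic" has been bound to the server's management notification address in the broker configuration, and that a ConnectionFactory is already available; the message selector assumes the _HQ_NotifType header carries the type name shown in the table (CONSUMER_SLOW), so adjust it if your version reports the numeric value (21) instead.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;

import org.hornetq.api.jms.HornetQJMSClient;

public class SlowConsumerNotificationSketch
{
   public static void listen(ConnectionFactory cf) throws Exception
   {
      Connection connection = cf.createConnection();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

      // Assumes a JMS topic named "notificationsTopic" is bound to the server's
      // management notification address in the broker configuration.
      Topic notificationsTopic = HornetQJMSClient.createTopic("notificationsTopic");

      // Only deliver CONSUMER_SLOW notifications by selecting on the _HQ_NotifType header
      // (assumed to hold the type name; see the lead-in above).
      MessageConsumer consumer =
         session.createConsumer(notificationsTopic, "_HQ_NotifType = 'CONSUMER_SLOW'");

      consumer.setMessageListener(new MessageListener()
      {
         public void onMessage(Message notification)
         {
            try
            {
               // Headers listed for CONSUMER_SLOW in the table above.
               System.out.println("Slow consumer " + notification.getStringProperty("_HQ_ConsumerName") +
                                  " (session " + notification.getStringProperty("_HQ_SessionName") + ")" +
                                  " on address " + notification.getStringProperty("_HQ_Address") +
                                  " from " + notification.getStringProperty("_HQ_RemoteAddress"));
            }
            catch (Exception e)
            {
               e.printStackTrace();
            }
         }
      });

      connection.start();
   }
}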
http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/queue-attributes.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/queue-attributes.xml b/docs/user-manual/en/queue-attributes.xml index 8ec06b5..f9b2fd8 100644 --- a/docs/user-manual/en/queue-attributes.xml +++ b/docs/user-manual/en/queue-attributes.xml @@ -109,6 +109,9 @@ <redistribution-delay>0</redistribution-delay> <send-to-dla-on-no-route>true</send-to-dla-on-no-route> <address-full-policy>PAGE</address-full-policy> + <slow-consumer-threshold>-1</slow-consumer-threshold> + <slow-consumer-policy>NOTIFY</slow-consumer-policy> + <slow-consumer-check-period>5</slow-consumer-check-period> </address-setting> </address-settings> The idea with address settings, is you can provide a block of settings which will be @@ -154,7 +157,16 @@ See the following chapters for more info , . - - + slow-consumer-threshold. The minimum rate of message consumption allowed before a + consumer is considered "slow." Measured in messages-per-second. Default is -1 (i.e. disabled); any other valid + value must be greater than 0. + slow-consumer-policy. What should happen when a slow consumer is detected. + KILL will kill the consumer's connection (which will obviously impact any other client + threads using that same connection). NOTIFY will send a CONSUMER_SLOW management + notification which an application could receive and take action with. See + for more details on this notification. + slow-consumer-check-period. How often to check for slow consumers on a particular queue. + Measured in minutes. Default is 5. See for more information about slow + consumer detection.
http://git-wip-us.apache.org/repos/asf/activemq-6/blob/177e6820/docs/user-manual/en/slow-consumers.xml ---------------------------------------------------------------------- diff --git a/docs/user-manual/en/slow-consumers.xml b/docs/user-manual/en/slow-consumers.xml new file mode 100644 index 0000000..aef287d --- /dev/null +++ b/docs/user-manual/en/slow-consumers.xml @@ -0,0 +1,53 @@ + + + + + + + + + + + + + + + + + + + + +%BOOK_ENTITIES; +]> + + Detecting Slow Consumers + In this section we will discuss how HornetQ can be configured to deal with slow consumers. A slow consumer with + a server-side queue (e.g. JMS topic subscriber) can pose a significant problem for broker performance. If messages + build up in the consumer's server-side queue then memory will begin filling up and the broker may enter paging + mode which would impact performance negatively. However, criteria can be set so that consumers which don't + acknowledge messages quickly enough can potentially be disconnected from the broker which in the case of a + non-durable JMS subscriber would allow the broker to remove the subscription and all of its messages freeing up + valuable server resources. + +
+ Configuration required for detecting slow consumers + By default the server will not detect slow consumers. If slow consumer detection is desired then see + + for more details. + + The calculation to determine whether or not a consumer is slow only inspects the number of messages a + particular consumer has acknowledged. It doesn't take into account whether or not flow + control has been enabled on the consumer, whether or not the consumer is streaming a large message, etc. Keep + this in mind when configuring slow consumer detection. + + Please note that slow consumer checks are performed using the scheduled thread pool and that each queue on + the broker with slow consumer detection enabled will cause a new entry in the internal + java.util.concurrent.ScheduledThreadPoolExecutor instance. If there are a high number of + queues and the slow-consumer-check-period is relatively low then there may be delays in + executing some of the checks. However, this will not impact the accuracy of the calculations used by the + detection algorithm. See for more details about this pool. + +
+