activemq-commits mailing list archives

From: build...@apache.org
Subject: svn commit: r860565 - in /websites/production/activemq/content: cache/main.pageCache replicated-leveldb-store.html
Date: Thu, 02 May 2013 08:21:46 GMT
Author: buildbot
Date: Thu May  2 08:21:44 2013
New Revision: 860565

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/replicated-leveldb-store.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/replicated-leveldb-store.html
==============================================================================
--- websites/production/activemq/content/replicated-leveldb-store.html (original)
+++ websites/production/activemq/content/replicated-leveldb-store.html Thu May  2 08:21:44 2013
@@ -84,18 +84,18 @@ to other ActiveMQ nodes so that you don'
 <p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache ZooKeeper</a> to coordinate which node in the cluster becomes<br clear="none">
 the master.  The elected master broker node starts and accepts client connections.<br clear="none">
 The other nodes go into slave mode and connect the the master and synchronize their persistent<br clear="none">
-state /w it.  The salve nodes do not accept client connections.  All persistent operations are <br clear="none">
+state /w it.  The slave nodes do not accept client connections.  All persistent operations are <br clear="none">
 replicated to the connected slaves.  If the master dies,<br clear="none">
 the slaves with the latest update gets promoted to become the master.  The failed node can then <br clear="none">
 be brought back online and it will go into slave mode.</p>

-<p>All messaging operations which require a sync to disk will wait the update to be replicated to a quorum<br clear="none">
+<p>All messaging operations which require a sync to disk will wait for the update to be replicated to a quorum<br clear="none">
 of the nodes before completing.  So if you configure the store with <tt>replicas="3"</tt> then the quorum<br clear="none">
 size is <tt>(3/2+1)=2</tt>.  The master will store the update locally and wait for 1 other slave to store <br clear="none">
 the update before reporting success.</p>

 <p>When a new master is elected, you also need at least a quorum of nodes online to be able to find a <br clear="none">
-node with the lastest updates.  The node with the laste updates will become the new master.  Therefore,<br clear="none">
+node with the lastest updates.  The node with the lastest updates will become the new master.  Therefore,<br clear="none">
 it's recommend that you run with at least 3 replica nodes so that you can take one down without suffering<br clear="none">
 a service outage.</p>
 



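For context, the page text in the diff above configures the store with replicas="3", giving a quorum of (3/2+1)=2. A minimal sketch of the kind of broker persistence configuration that text is describing is shown below; the element and attribute names follow the ActiveMQ Replicated LevelDB Store documentation, and the directory, bind, zkAddress, zkPath and hostname values are illustrative placeholders rather than values taken from this commit.

    <!-- Sketch only: all values below are placeholders, not taken from this commit. -->
    <broker brokerName="broker1" xmlns="http://activemq.apache.org/schema/core">
      <persistenceAdapter>
        <!-- Replicated LevelDB store: replicas="3" means the master plus one
             slave (a quorum of 2) must store each update before it is
             acknowledged to the client. -->
        <replicatedLevelDB
            directory="activemq-data"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="zk1:2181,zk2:2181,zk3:2181"
            zkPath="/activemq/leveldb-stores"
            hostname="broker1"/>
      </persistenceAdapter>
    </broker>

With three replica nodes, one node can be taken down and the remaining two still form a quorum, which is why the page recommends running at least 3 replica nodes to avoid a service outage.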