activemq-commits mailing list archives

From build...@apache.org
Subject svn commit: r861338 - in /websites/production/activemq/content: cache/main.pageCache leveldb-store.html persistence.html replicated-leveldb-store.html
Date Wed, 08 May 2013 12:22:51 GMT
Author: buildbot
Date: Wed May  8 12:22:51 2013
New Revision: 861338

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/leveldb-store.html
    websites/production/activemq/content/persistence.html
    websites/production/activemq/content/replicated-leveldb-store.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/leveldb-store.html
==============================================================================
--- websites/production/activemq/content/leveldb-store.html (original)
+++ websites/production/activemq/content/leveldb-store.html Wed May  8 12:22:51 2013
@@ -72,22 +72,14 @@
         <tr>
         <td valign="top" width="100%">
           <div class="wiki-content maincontent">
-<div class="panelMacro"><table class="infoMacro"><colgroup span="1"><col
span="1" width="24"><col span="1"></colgroup><tr><td colspan="1" rowspan="1"
valign="top"><img align="middle" src="https://cwiki.apache.org/confluence/images/icons/emoticons/information.gif"
width="16" height="16" alt="" border="0"></td><td colspan="1" rowspan="1"><b>Version
Compatibility</b><br clear="none"></td></tr></table></div>
-<p>Available in ActiveMQ 5.8.0 and newer</p>
+<div class="panelMacro"><table class="infoMacro"><colgroup span="1"><col
span="1" width="24"><col span="1"></colgroup><tr><td colspan="1" rowspan="1"
valign="top"><img align="middle" src="https://cwiki.apache.org/confluence/images/icons/emoticons/information.gif"
width="16" height="16" alt="" border="0"></td><td colspan="1" rowspan="1"><b>Version
Compatibility</b><br clear="none">Available in ActiveMQ 5.8.0 and newer</td></tr></table></div>
 
-<p>LevelDB is a file based persistence database that is local to the message broker
that is using it.<br clear="none">
-It has been optimized to provide even faster persistence than KahaDB.  It's similar to KahahDB
but <br clear="none">
-instead of using a custom B-Tree implementation to index the write ahead logs, it uses <a
shape="rect" class="external-link" href="https://code.google.com/p/leveldb/" rel="nofollow">LevelDB</a><br
clear="none">
-based indexes which have several nice properties due to the 'append only' files access patterns
: </p>
+<p>The LevelDB Store is a file-based persistence database that is local to the message
broker that is using it. It has been optimized to provide even faster persistence than KahaDB.
 It's similar to KahaDB, but instead of using a custom B-Tree implementation to index the
write-ahead logs, it uses <a shape="rect" class="external-link" href="https://code.google.com/p/leveldb/"
rel="nofollow">LevelDB</a>-based indexes, which have several nice properties due to
the append-only file access patterns:</p>
 
 <ul><li>Fast updates (No need to do random disk updates)</li><li>Concurrent
reads</li><li>Fast index snapshots using hard links</li></ul>
 
 
-<p>Both KahaDB and the LevelDB store have to do periodic garbage collection cycles
to determine which <br clear="none">
-log files can deleted.  In the case of KahaDB, this can be quite expensive as you increase<br
clear="none">
-the amount of data stored and can cause read/write stalls while the collection occurs.  The
LevelDB<br clear="none">
-store uses a much cheaper algorithm to determine when log files can be collected and avoids
those<br clear="none">
-stalls.</p>
+<p>Both KahaDB and the LevelDB store have to do periodic garbage collection cycles
to determine which log files can be deleted.  In the case of KahaDB, this can be quite expensive
as you increase the amount of data stored, and it can cause read/write stalls while the collection
occurs.  The LevelDB store uses a much cheaper algorithm to determine when log files can be
collected and avoids those stalls.</p>
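
For orientation before the Configuration section that follows: enabling the store is a
one-element change in the broker's activemq.xml. A minimal sketch, assuming a broker that
keeps its data under ./activemq-data (the directory value is an example, not taken from
this page):

    <broker brokerName="broker" xmlns="http://activemq.apache.org/schema/core">
      <!-- swap the default persistence adapter for the LevelDB store -->
      <persistenceAdapter>
        <levelDB directory="activemq-data"/>
      </persistenceAdapter>
    </broker>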
 
 <h2><a shape="rect" name="LevelDBStore-Configuration"></a>Configuration</h2>
 

Modified: websites/production/activemq/content/persistence.html
==============================================================================
--- websites/production/activemq/content/persistence.html (original)
+++ websites/production/activemq/content/persistence.html Wed May  8 12:22:51 2013
@@ -72,13 +72,18 @@
         <tr>
         <td valign="top" width="100%">
           <div class="wiki-content maincontent">
-<h2><a shape="rect" name="Persistence-ActiveMQV5"></a>ActiveMQ V5</h2>
+<h2><a shape="rect" name="Persistence-ActiveMQV5.9"></a>ActiveMQ V5.9</h2>
 
-<p>From 5.3 onwards - we recommend you use <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>
- which offers improved scalability and recoverability over the <a shape="rect" href="amq-message-store.html"
title="AMQ Message Store">AMQ Message Store</a>.</p>
+<p>In ActiveMQ 5.9, the <a shape="rect" href="replicated-leveldb-store.html" title="Replicated
LevelDB Store">Replicated LevelDB Store</a> is introduced.  It uses <a
shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache ZooKeeper</a>
to pick a master from a set of broker nodes configured to replicate a single LevelDB Store.
 It then synchronizes all slave LevelDB Stores with the master and keeps them up to date by
replicating all updates from the master.  This might become the preferred <a shape="rect" href="masterslave.html"
title="MasterSlave">Master Slave</a> configuration going forward.</p>
 
-<p>In ActiveMQ 5.8, the <a shape="rect" href="leveldb-store.html" title="LevelDB
Store">LevelDB Store</a> was introduced.  Although not yet the default message store,
it provides better performance than <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>.
 We expect this store implementation become the default in future releases.</p>
+<h2><a shape="rect" name="Persistence-ActiveMQV5.8"></a>ActiveMQ V5.8</h2>
 
-<p>The <a shape="rect" href="amq-message-store.html" title="AMQ Message Store">AMQ
Message Store</a> which although faster than <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>
- does not scales as well as <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>
and recovery times take longer.</p>
+<p>In ActiveMQ 5.8, the <a shape="rect" href="leveldb-store.html" title="LevelDB
Store">LevelDB Store</a> was introduced.  The LevelDB Store is a file-based persistence
database. It has been optimized to provide even faster persistence than KahaDB.  Although
not yet the default message store, we expect this store implementation to become the default
in future releases.</p>
+
+<h2><a shape="rect" name="Persistence-ActiveMQV5.3"></a>ActiveMQ V5.3</h2>
+
+<p>From 5.3 onwards, we recommend you use <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>,
which offers improved scalability and recoverability over the <a shape="rect" href="amq-message-store.html"
title="AMQ Message Store">AMQ Message Store</a>.<br clear="none">
+The <a shape="rect" href="amq-message-store.html" title="AMQ Message Store">AMQ Message
Store</a>, although faster than <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>,
does not scale as well as <a shape="rect" href="kahadb.html" title="KahaDB">KahaDB</a>,
and its recovery times are longer.</p>
 
 <h2><a shape="rect" name="Persistence-ActiveMQV4"></a>ActiveMQ V4</h2>
 <p>For long term persistence we recommend using JDBC coupled with our high performance
journal. You can use just JDBC if you wish, but it's quite slow.</p>

Modified: websites/production/activemq/content/replicated-leveldb-store.html
==============================================================================
--- websites/production/activemq/content/replicated-leveldb-store.html (original)
+++ websites/production/activemq/content/replicated-leveldb-store.html Wed May  8 12:22:51
2013
@@ -76,35 +76,23 @@
 
 <h2><a shape="rect" name="ReplicatedLevelDBStore-Synopsis"></a>Synopsis
</h2>
 
-<p>The Replicated LevelDB Store is just like the <a shape="rect" href="leveldb-store.html"
title="LevelDB Store">LevelDB Store</a> but it also replicates updates<br clear="none">
-to other ActiveMQ nodes so that you don't loose messages if one of the Broker nodes die.
 </p>
+<p>The Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of
broker nodes configured to replicate a LevelDB Store. It then synchronizes all slave LevelDB
Stores with the master and keeps them up to date by replicating all updates from the master.</p>
+
+<p>The Replicated LevelDB Store uses the same data files as a LevelDB Store, so you
can switch a broker configuration between replicated and non-replicated whenever you want.</p>
 
 <h2><a shape="rect" name="ReplicatedLevelDBStore-Howitworks."></a>How it
works.</h2>
 
 <p><span class="image-wrap" style=""><img src="replicated-leveldb-store.data/replicated-leveldb-store.png"
style="border: 0px solid black"></span></p>
 
-<p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache
ZooKeeper</a> to coordinate which node in the cluster becomes<br clear="none">
-the master.  The elected master broker node starts and accepts client connections.<br
clear="none">
-The other nodes go into slave mode and connect the the master and synchronize their persistent<br
clear="none">
-state /w it.  The slave nodes do not accept client connections.  All persistent operations
are <br clear="none">
-replicated to the connected slaves.  If the master dies,<br clear="none">
-the slaves with the latest update gets promoted to become the master.  The failed node can
then <br clear="none">
-be brought back online and it will go into slave mode.</p>
-
-<p>All messaging operations which require a sync to disk will wait for the update to
be replicated to a quorum<br clear="none">
-of the nodes before completing.  So if you configure the store with <tt>replicas="3"</tt>
then the quorum<br clear="none">
-size is <tt>(3/2+1)=2</tt>.  The master will store the update locally and wait
for 1 other slave to store <br clear="none">
-the update before reporting success.</p>
-
-<p>When a new master is elected, you also need at least a quorum of nodes online to
be able to find a <br clear="none">
-node with the lastest updates.  The node with the lastest updates will become the new master.
 Therefore,<br clear="none">
-it's recommend that you run with at least 3 replica nodes so that you can take one down without
suffering<br clear="none">
-a service outage.</p>
+<p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache
ZooKeeper</a> to coordinate which node in the cluster becomes the master.  The elected
master broker node starts and accepts client connections. The other nodes go into slave mode,
connect to the master, and synchronize their persistent state with it.  The slave nodes
do not accept client connections.  All persistent operations are replicated to the connected
slaves.  If the master dies, the slave with the latest update gets promoted to become the
master.  The failed node can then be brought back online and it will go into slave mode.</p>
+
+<p>All messaging operations which require a sync to disk will wait for the update to
be replicated to a quorum of the nodes before completing.  So if you configure the store with
<tt>replicas="3"</tt> then the quorum size is <tt>(3/2+1)=2</tt>.
 The master will store the update locally and wait for 1 other slave to store the update before
reporting success.</p>
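
To make the quorum arithmetic concrete (integer division): with replicas="3" the quorum is
(3/2)+1 = 2, and with replicas="5" it is (5/2)+1 = 3, so a five-node replication set can keep
accepting synchronous writes with two nodes down.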
+
+<p>When a new master is elected, you also need at least a quorum of nodes online to
be able to find a node with the latest updates.  The node with the latest updates will become
the new master.  Therefore, it's recommended that you run with at least 3 replica nodes so that
you can take one down without suffering a service outage.</p>
 
 <h3><a shape="rect" name="ReplicatedLevelDBStore-DeploymentTips"></a>Deployment
Tips</h3>
 
-<p>Clients should be using the <a shape="rect" href="failover-transport-reference.html"
title="Failover Transport Reference">Failover Transport</a> to connect to the broker
<br clear="none">
-nodes in the replication cluster. e.g. using a URL something like the following:</p>
+<p>Clients should be using the <a shape="rect" href="failover-transport-reference.html"
title="Failover Transport Reference">Failover Transport</a> to connect to the broker
nodes in the replication cluster, e.g. using a URL like the following:</p>
 
 <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent">
 <pre class="code-java">
@@ -112,9 +100,7 @@ failover:(tcp:<span class="code-comment"
 </pre>
 </div></div>
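
The URL itself is elided by the hunk boundary above, but it follows the standard failover
transport pattern; a sketch with hypothetical hostnames and the default OpenWire port:

    failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)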
 
-<p>You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is
highly available.  <br clear="none">
-Don't overcommit your ZooKeeper servers.  An overworked ZooKeeper might start thinking live<br
clear="none">
-replication nodes have gone online due to delays in processing their 'keep-alive' messages.</p>
+<p>You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is
highly available. Don't overcommit your ZooKeeper servers.  An overworked ZooKeeper might
start thinking live replication nodes have gone offline due to delays in processing their 'keep-alive'
messages.</p>
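
For reference, a minimal three-node ZooKeeper ensemble is configured with a zoo.cfg along
these lines on each server (hostnames, ports, and the data directory are examples, not values
taken from this page):

    # basic timing: one tick = 2s; session timeouts are multiples of this
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    # ensemble members; each server also needs a matching myid file in dataDir
    server.1=zk1:2888:3888
    server.2=zk2:2888:3888
    server.3=zk3:2888:3888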
 
 <h2><a shape="rect" name="ReplicatedLevelDBStore-Configuration"></a>Configuration</h2>
 
@@ -141,8 +127,7 @@ replication nodes have gone online due t
 
 <h3><a shape="rect" name="ReplicatedLevelDBStore-ReplicatedLevelDBStoreProperties"></a>Replicated
LevelDB Store Properties</h3>
 
-<p>All the broker nodes that are part of the same replication set should have matching
<tt>brokerName</tt> XML attributes.<br clear="none">
-The following configuration properties should be the same on all the broker nodes that are
part of the same replication set:</p>
+<p>All the broker nodes that are part of the same replication set should have matching
<tt>brokerName</tt> XML attributes. The following configuration properties should
be the same on all the broker nodes that are part of the same replication set:</p>
 
 <div class="table-wrap">
 <table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1" class="confluenceTh">
default value </th><th colspan="1" rowspan="1" class="confluenceTh"> Comments
</th></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> replicas
</td><td colspan="1" rowspan="1" class="confluenceTd"> 2 </td><td colspan="1"
rowspan="1" class="confluenceTd"> The number of store replicas that will exist in the cluster.
 At least (replicas/2)+1 nodes must be online to avoid service outage. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> securityToken </td><td colspan="1"
rowspan="1" class="confluenceTd">&#160;</td><td colspan="1" rowspan="1" class="confluenceTd">
A security token which must match on all replication nodes for them to accept each others
replication requests. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
zkAddress </td><td colspan="1" rowspan="1" class="confluenceTd">
  127.0.0.1:2181 </td><td colspan="1" rowspan="1" class="confluenceTd"> A comma
separated list of ZooKeeper servers. </td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"> zkPassword </td><td colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td
colspan="1" rowspan="1" class="confluenceTd"> The password to use when connecting to the
ZooKeeper server. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
zkPath </td><td colspan="1" rowspan="1" class="confluenceTd"> /default </td><td
colspan="1" rowspan="1" class="confluenceTd"> The path to the ZooKeeper directory where
Master/Slave election information will be exchanged. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> zkSessionTmeout </td><td colspan="1"
rowspan="1" class="confluenceTd"> 2s </td><td colspan="1" rowspan="1" class="confluenceTd">
How quickly a node failure will be detected by ZooKeeper. </td></tr></tbody></table>
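
Putting the property table together, a broker in the replication set configures the store
through a replicatedLevelDB persistence adapter. A minimal sketch, assuming a three-node
replication set and a hypothetical three-server ZooKeeper ensemble (per the note above,
brokerName must match across the set):

    <broker brokerName="broker" xmlns="http://activemq.apache.org/schema/core">
      <persistenceAdapter>
        <replicatedLevelDB
            directory="activemq-data"
            replicas="3"
            zkAddress="zk1:2181,zk2:2181,zk3:2181"
            zkPath="/activemq/leveldb-stores"/>
      </persistenceAdapter>
    </broker>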


