activemq-commits mailing list archives

From build...@apache.org
Subject svn commit: r869035 - in /websites/production/activemq/content: cache/main.pageCache replicated-leveldb-store.html
Date Wed, 10 Jul 2013 14:21:27 GMT
Author: buildbot
Date: Wed Jul 10 14:21:27 2013
New Revision: 869035

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/replicated-leveldb-store.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/replicated-leveldb-store.html
==============================================================================
--- websites/production/activemq/content/replicated-leveldb-store.html (original)
+++ websites/production/activemq/content/replicated-leveldb-store.html Wed Jul 10 14:21:27
2013
@@ -50,8 +50,8 @@
       <div>
 
 <!-- Banner -->
-
-	<div id="asf_logo">
+<p>
+	</p><div id="asf_logo">
 	<div id="activemq_logo">
             <a shape="rect" style="float:left; width:280px;display:block;text-indent:-5000px;text-decoration:none;line-height:60px;
margin-top:10px; margin-left:100px;" href="http://activemq.apache.org" title="The most popular
and powerful open source Message Broker">ActiveMQ</a> &#8482;
             <a shape="rect" style="float:right; width:210px;display:block;text-indent:-5000px;text-decoration:none;line-height:60px;
margin-top:15px; margin-right:10px;" href="http://www.apache.org" title="The Apache Software
Foundation">ASF</a>
@@ -86,7 +86,7 @@
 
 <p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache
ZooKeeper</a> to coordinate which node in the cluster becomes the master.  The elected
master broker node starts and accepts client connections. The other nodes go into slave mode,
connect to the master, and synchronize their persistent state with it.  The slave nodes
do not accept client connections.  All persistent operations are replicated to the connected
slaves.  If the master dies, the slave with the latest updates gets promoted to become the
master.  The failed node can then be brought back online and it will go into slave mode.</p>
 
-<p>All messaging operations which require a sync to disk will wait for the update to
be replicated to a quorum of the nodes before completing.  So if you configure the store with
<tt>replicas="3"</tt> then the quorum size is <tt>(3/2+1)=2</tt>.
 The master will store the update locally and wait for 1 other slave to store the update before
reporting success.</p>
+<p>All messaging operations which require a sync to disk will wait for the update to
be replicated to a quorum of the nodes before completing.  So if you configure the store with
<tt>replicas="3"</tt> then the quorum size is <tt>(3/2+1)=2</tt>.
The master will store the update locally and wait for 1 other slave to store the update before
reporting success.  Another way to think about it is that the store will do synchronous replication
to a quorum of the replication nodes and asynchronous replication to any additional
nodes.</p>
 
 <p>When a new master is elected, you also need at least a quorum of nodes online to
be able to find a node with the latest updates.  The node with the latest updates will become
the new master.  Therefore, it's recommended that you run with at least 3 replica nodes so that
you can take one down without suffering a service outage.</p>
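The quorum arithmetic above uses integer division. A minimal sketch (a hypothetical helper, not part of ActiveMQ) showing why <tt>replicas="3"</tt> tolerates one node going down while <tt>replicas="2"</tt> tolerates none:

```java
// Hypothetical helper, not ActiveMQ code: the (replicas/2)+1 quorum rule
// described above, using Java integer division.
public class Quorum {
    // Minimum number of nodes that must be online.
    static int quorumSize(int replicas) {
        return replicas / 2 + 1;
    }

    // Nodes that may fail while the cluster stays available.
    static int tolerated(int replicas) {
        return replicas - quorumSize(replicas);
    }

    public static void main(String[] args) {
        System.out.println(quorumSize(3) + " needed of 3, " + tolerated(3) + " may fail"); // 2 needed, 1 may fail
        System.out.println(quorumSize(2) + " needed of 2, " + tolerated(2) + " may fail"); // 2 needed, 0 may fail
        System.out.println(quorumSize(5) + " needed of 5, " + tolerated(5) + " may fail"); // 3 needed, 2 may fail
    }
}
```

Note that an even replica count buys no extra fault tolerance over the next-lower odd count, which is why 3 (or 5) replicas are the usual choice.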
 
@@ -95,9 +95,9 @@
 <p>Clients should be using the <a shape="rect" href="failover-transport-reference.html"
title="Failover Transport Reference">Failover Transport</a> to connect to the broker
nodes in the replication cluster. e.g. using a URL something like the following:</p>
 
 <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent">
-<pre class="code-java">
-failover:(tcp:<span class="code-comment">//broker1:61616,tcp://broker2:61616,tcp://broker3:61616)</span>
-</pre>
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
+failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
+]]></script>
 </div></div>
 
 <p>You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is
highly available. Don't overcommit your ZooKeeper servers.  An overworked ZooKeeper might
start thinking live replication nodes have gone offline due to delays in processing their 'keep-alive'
messages.</p>
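For reference, a sketch of a matching 3-node ensemble configuration (<tt>zoo.cfg</tt>); the hostnames reuse the examples from this page, the ports and timings are ZooKeeper defaults, and the paths are assumptions to adjust for your site:

```
# Sketch of a 3-node ZooKeeper ensemble (zoo.cfg).
# Hostnames follow the zkAddress example on this page; adjust for your site.
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zoo1.example.org:2888:3888
server.2=zoo2.example.org:2888:3888
server.3=zoo3.example.org:2888:3888
```

Each server also needs a <tt>myid</tt> file under <tt>dataDir</tt> containing its own server number.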
@@ -107,22 +107,22 @@ failover:(tcp:<span class="code-comment"
 <p>You can configure ActiveMQ to use LevelDB for its persistence adapter, as shown below:</p>
 
 <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent">
-<pre class="code-java">
-  &lt;broker brokerName=<span class="code-quote">"broker"</span> ... &gt;
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
+  &lt;broker brokerName="broker" ... &gt;
     ...
     &lt;persistenceAdapter&gt;
       &lt;replicatedLevelDB
-        directory=<span class="code-quote">"activemq-data"</span>
-        replicas=<span class="code-quote">"2"</span>
-        bind=<span class="code-quote">"tcp:<span class="code-comment">//0.0.0.0:0"</span>
-</span>        zkAddress=<span class="code-quote">"zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"</span>
-        zkPassword=<span class="code-quote">"password"</span>
-        zkPath=<span class="code-quote">"/activemq/leveldb-stores"</span>
+        directory="activemq-data"
+        replicas="3"
+        bind="tcp://0.0.0.0:0"
+        zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
+        zkPassword="password"
+        zkPath="/activemq/leveldb-stores"
         /&gt;
     &lt;/persistenceAdapter&gt;
     ...
   &lt;/broker&gt;
-</pre>
+]]></script>
 </div></div>
 
 <h3><a shape="rect" name="ReplicatedLevelDBStore-ReplicatedLevelDBStoreProperties"></a>Replicated
LevelDB Store Properties</h3>
@@ -130,7 +130,7 @@ failover:(tcp:<span class="code-comment"
 <p>All the broker nodes that are part of the same replication set should have matching
<tt>brokerName</tt> XML attributes. The following configuration properties should
be the same on all the broker nodes that are part of the same replication set:</p>
 
 <div class="table-wrap">
-<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1" class="confluenceTh">
default value </th><th colspan="1" rowspan="1" class="confluenceTh"> Comments
</th></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> <tt>replicas</tt>
</td><td colspan="1" rowspan="1" class="confluenceTd"> <tt>2</tt>
</td><td colspan="1" rowspan="1" class="confluenceTd"> The number of store replicas
that will exist in the cluster.  At least (replicas/2)+1 nodes must be online to avoid service
outage. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
<tt>securityToken</tt> </td><td colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td
colspan="1" rowspan="1" class="confluenceTd"> A security token which must match on all
replication nodes for them to accept each others replication requests. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkAddress</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>127.0.0.1:2181</tt> </td><td colspan="1"
rowspan="1" class="confluenceTd"> A comma separated list of ZooKeeper servers. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkPassword</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td colspan="1" rowspan="1"
class="confluenceTd"> The password to use when connecting to the ZooKeeper server. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkPath</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>/default</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> The path to the ZooKeeper directory where
Master/Slave election information will be exchanged. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkSessionTmeout</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>2s</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> How quickly a node failure will be detected
by ZooKeeper. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> <tt>sync</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>quorum_mem</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> Controls where updates are reside before
being considered complete. This setting is a comma separated list of the following options:
<tt>local_mem</tt>, <tt>local_disk</tt>, <tt>remote_mem</tt>,
<tt>remote_disk</tt>, <tt>quorum_mem</tt>, <tt>quorum_disk</tt>.
If you combine two settings for a target, the stronger guarantee is used.  For example, configuring
<tt>local_mem, local_disk</tt> is the same as just using <tt>local_disk</tt>.
 quorum_mem is the same as <tt>local_mem, remote_mem</tt> and <tt>quorum_disk</tt>
is the same as <tt>local_disk, remote_disk</tt> </td></tr></tbody></table>
+<table class="confluenceTable"><tbody><tr><th colspan="1" rowspan="1"
class="confluenceTh"> property name </th><th colspan="1" rowspan="1" class="confluenceTh">
default value </th><th colspan="1" rowspan="1" class="confluenceTh"> Comments
</th></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> <tt>replicas</tt>
</td><td colspan="1" rowspan="1" class="confluenceTd"> <tt>3</tt>
</td><td colspan="1" rowspan="1" class="confluenceTd"> The number of nodes that
will exist in the cluster.  At least (replicas/2)+1 nodes must be online to avoid service
outage. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd">
<tt>securityToken</tt> </td><td colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td
colspan="1" rowspan="1" class="confluenceTd"> A security token which must match on all
replication nodes for them to accept each other's replication requests. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkAddress</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>127.0.0.1:2181</tt> </td><td colspan="1"
rowspan="1" class="confluenceTd"> A comma separated list of ZooKeeper servers. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkPassword</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td colspan="1" rowspan="1"
class="confluenceTd"> The password to use when connecting to the ZooKeeper server. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkPath</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>/default</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> The path to the ZooKeeper directory where
Master/Slave election information will be exchanged. </td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>zkSessionTmeout</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>2s</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> How quickly a node failure will be detected
by ZooKeeper. </td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"> <tt>sync</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> <tt>quorum_mem</tt> </td><td
colspan="1" rowspan="1" class="confluenceTd"> Controls where updates reside before
being considered complete. This setting is a comma separated list of the following options:
<tt>local_mem</tt>, <tt>local_disk</tt>, <tt>remote_mem</tt>,
<tt>remote_disk</tt>, <tt>quorum_mem</tt>, <tt>quorum_disk</tt>.
If you combine two settings for a target, the stronger guarantee is used.  For example, configuring
<tt>local_mem, local_disk</tt> is the same as just using <tt>local_disk</tt>.
 <tt>quorum_mem</tt> is the same as <tt>local_mem, remote_mem</tt> and <tt>quorum_disk</tt>
is the same as <tt>local_disk, remote_disk</tt> </td></tr></tbody></table>
 </div>
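The "stronger guarantee wins" rule for combined <tt>sync</tt> values can be sketched as follows. This is not ActiveMQ code; it is a hypothetical model of the combining semantics the table describes, assuming disk is the stronger guarantee per location and <tt>quorum_*</tt> expands to <tt>local_*</tt> plus <tt>remote_*</tt>:

```java
import java.util.*;

// Hypothetical sketch (not ActiveMQ code) of the sync-option combining rule:
// quorum_* expands to local_* + remote_*, and per location (local/remote)
// the stronger guarantee (disk over mem) wins.
public class SyncCombine {
    // disk (1) is a stronger guarantee than mem (0)
    static int rank(String level) {
        return level.equals("disk") ? 1 : 0;
    }

    static Set<String> effective(String csv) {
        Map<String, String> strongest = new TreeMap<>();
        for (String opt : csv.split("\\s*,\\s*")) {
            List<String> expanded = opt.startsWith("quorum_")
                    ? Arrays.asList("local_" + opt.substring(7), "remote_" + opt.substring(7))
                    : Collections.singletonList(opt);
            for (String e : expanded) {
                String[] parts = e.split("_"); // [location, level]
                String prev = strongest.get(parts[0]);
                if (prev == null || rank(parts[1]) > rank(prev)) {
                    strongest.put(parts[0], parts[1]);
                }
            }
        }
        Set<String> out = new TreeSet<>();
        strongest.forEach((loc, lvl) -> out.add(loc + "_" + lvl));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(effective("local_mem, local_disk")); // [local_disk]
        System.out.println(effective("quorum_mem"));            // [local_mem, remote_mem]
        System.out.println(effective("quorum_disk"));           // [local_disk, remote_disk]
    }
}
```

This reproduces the equivalences in the table: <tt>local_mem, local_disk</tt> collapses to <tt>local_disk</tt>, and the quorum settings are shorthand for their local/remote pairs.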
 
 
@@ -165,8 +165,8 @@ failover:(tcp:<span class="code-comment"
 
 <h3><a shape="rect" name="Navigation-Search"></a>Search</h3>
 
-
-<div>
+<p>
+</p><div>
 <form enctype="application/x-www-form-urlencoded" method="get" action="http://www.google.com/search"
style="font-size: 10px;">
 <input type="hidden" name="ie" value="UTF-8">
 <input type="hidden" name="oe" value="UTF-8">


