activemq-commits mailing list archives

From build...@apache.org
Subject svn commit: r935180 - in /websites/production/activemq/content: cache/main.pageCache replicated-leveldb-store.html
Date Mon, 05 Jan 2015 21:21:40 GMT
Author: buildbot
Date: Mon Jan  5 21:21:40 2015
New Revision: 935180

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/replicated-leveldb-store.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/replicated-leveldb-store.html
==============================================================================
--- websites/production/activemq/content/replicated-leveldb-store.html (original)
+++ websites/production/activemq/content/replicated-leveldb-store.html Mon Jan  5 21:21:40 2015
@@ -81,54 +81,18 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><h2 id="ReplicatedLevelDBStore-Synopsis">Synopsis</h2>
-
-<p>The Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of
broker nodes configured to replicate a LevelDB Store. It then synchronizes all slave LevelDB
Stores with the master and keeps them up to date by replicating all updates from the master.</p>
-
-<p>The Replicated LevelDB Store uses the same data files as a LevelDB Store, so you
can switch a broker configuration between replicated and non-replicated whenever you want.</p>
-
-
-    <div class="aui-message hint shadowed information-macro">
+<div class="wiki-content maincontent"><h2 id="ReplicatedLevelDBStore-Synopsis">Synopsis</h2><p>The
Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of broker nodes
configured to replicate a LevelDB Store. It then synchronizes all slave LevelDB Stores with the
master and keeps them up to date by replicating all updates from the master.</p><p>The
Replicated LevelDB Store uses the same data files as a LevelDB Store, so you can switch a
broker configuration between replicated and non-replicated whenever you want.</p>
 <div class="aui-message hint shadowed information-macro">
                     <p class="title">Version Compatibility</p>
                             <span class="aui-icon icon-hint">Icon</span>
                 <div class="message-content">
-                            
-<p>Available as of ActiveMQ 5.9.0.</p>
+                            <p>Available as of ActiveMQ 5.9.0.</p>
                     </div>
     </div>
- 
-
-<h2 id="ReplicatedLevelDBStore-Howitworks.">How it works.</h2>
-
-<p><img class="confluence-embedded-image" src="https://cwiki.apache.org/confluence/download/attachments/31820167/replicated-leveldb-store.png?version=1&amp;modificationDate=1367958504000&amp;api=v2"
data-image-src="/confluence/download/attachments/31820167/replicated-leveldb-store.png?version=1&amp;modificationDate=1367958504000&amp;api=v2"></p>
-
-<p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache
ZooKeeper</a> to coordinate which node in the cluster becomes the master.  The elected
master broker node starts and accepts client connections. The other nodes go into slave mode
and connect to the master and synchronize their persistent state with it.  The slave nodes
do not accept client connections.  All persistent operations are replicated to the connected
slaves.  If the master dies, the slave with the latest updates gets promoted to become the
master.  The failed node can then be brought back online and it will go into slave mode.</p>
-
-<p>All messaging operations which require a sync to disk will wait for the update to
be replicated to a quorum of the nodes before completing.  So if you configure the store with
<code>replicas="3"</code> then the quorum size is <code>(3/2+1)=2</code>.
 The master will store the update locally and wait for 1 other slave to store the update before
reporting success.  Another way to think about it is that the store will do synchronous replication
to a quorum of the replication nodes and asynchronous replication to any additional
nodes.</p>
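A quick worked example of the same arithmetic: with replicas="5" the quorum size is (5/2+1)=3, so the master plus two slaves must store each update before it completes, and up to two nodes can be offline without losing the quorum.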
-
-<p>When a new master is elected, you also need at least a quorum of nodes online to
be able to find a node with the latest updates.  The node with the latest updates will become
the new master.  Therefore, it is recommended that you run with at least 3 replica nodes so that
you can take one down without suffering a service outage.</p>
-
-<h3 id="ReplicatedLevelDBStore-DeploymentTips">Deployment Tips</h3>
-
-<p>Clients should use the <a shape="rect" href="failover-transport-reference.html">Failover
Transport</a> to connect to the broker nodes in the replication cluster, e.g. using
a URL like the following:</p>
-
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
-<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
-failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
+<h2 id="ReplicatedLevelDBStore-Howitworks.">How it works.</h2><p><img
class="confluence-embedded-image" src="replicated-leveldb-store.data/replicated-leveldb-store.png"
data-image-src="/confluence/download/attachments/31820167/replicated-leveldb-store.png?version=1&amp;modificationDate=1367958504000&amp;api=v2"></p><p>It
uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache
ZooKeeper</a> to coordinate which node in the cluster becomes the master. The elected
master broker node starts and accepts client connections. The other nodes go into slave mode
and connect to the master and synchronize their persistent state with it. The slave nodes do
not accept client connections. All persistent operations are replicated to the connected slaves.
If the master dies, the slave with the latest updates gets promoted to become the master.
The failed node can then be brought back online and it will go into slave mode.</p><p>All
messaging operations which require a sync to disk will wait for the update to be replicated to a quorum of the nodes before completing.
So if you configure the store with <code>replicas="3"</code> then the quorum size
is <code>(3/2+1)=2</code>. The master will store the update locally and wait for
1 other slave to store the update before reporting success. Another way to think about it
is that the store will do synchronous replication to a quorum of the replication nodes and asynchronous
replication to any additional nodes.</p><p>When a new master is elected,
you also need at least a quorum of nodes online to be able to find a node with the latest
updates. The node with the latest updates will become the new master. Therefore, it is recommended
that you run with at least 3 replica nodes so that you can take one down without suffering
a service outage.</p><h3 id="ReplicatedLevelDBStore-DeploymentTips">Deployment
Tips</h3><p>Clients should use the <a shape="rect" href="failover-transport-reference.html">Failover
Transport</a> to connect to the broker nodes in the replication cluster, e.g. using
a URL like the following:</p><div class="code panel pdl" style="border-width:
1px;"><div class="codeContent panelContent pdl">
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
 ]]></script>
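As an illustration of using that Failover Transport URL from a client, a minimal JMS sketch, assuming the ActiveMQ 5.x client library and the broker hostnames from the example URL (class name and error handling are illustrative only):

    import javax.jms.Connection;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverClient {
        public static void main(String[] args) throws Exception {
            // The failover: URL lets the client (re)connect to whichever node is currently the master.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create producers/consumers against the session as usual ...
            connection.close();
        }
    }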
-</div></div>
-
-<p>You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is
highly available. Don't overcommit your ZooKeeper servers.  An overworked ZooKeeper might
start thinking live replication nodes have gone offline due to delays in processing their
'keep-alive' messages.</p>
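If you run a three-node ZooKeeper ensemble, each broker's store would point at all of its members through the zkAddress property described below, for example (hostnames are placeholders):

    <replicatedLevelDB
        ...
        zkAddress="zk1:2181,zk2:2181,zk3:2181"
        ... />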
-
-<p>For best results, make sure you explicitly configure the hostname attribute with
a hostname or IP address that other cluster members can use to access the machine.
 The automatically determined hostname is not always accessible by the other cluster members
and results in slaves not being able to establish a replication session with the master.</p>
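For example, the hostname attribute described in the property table below can be pinned explicitly on each node (the address is a placeholder):

    <replicatedLevelDB
        ...
        hostname="broker1.example.com"
        ... />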
-
-<h2 id="ReplicatedLevelDBStore-Configuration">Configuration</h2>
-
-<p>You can configure ActiveMQ to use LevelDB for its persistence adapter, like below:</p>
-
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
-<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
-  &lt;broker brokerName=&quot;broker&quot; ... &gt;
+</div></div><p>You should run at least 3 ZooKeeper server nodes so that
the ZooKeeper service is highly available. Don't overcommit your ZooKeeper servers. An overworked
ZooKeeper might start thinking live replication nodes have gone offline due to delays in processing
their 'keep-alive' messages.</p><p>For best results, make sure you explicitly
configure the hostname attribute with a hostname or IP address that other cluster
members can use to access the machine. The automatically determined hostname is not always accessible
by the other cluster members and results in slaves not being able to establish a replication
session with the master.</p><h2 id="ReplicatedLevelDBStore-Configuration">Configuration</h2><p>You
can configure ActiveMQ to use LevelDB for its persistence adapter, like below:</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
 &lt;broker brokerName=&quot;broker&quot; ... &gt;
     ...
     &lt;persistenceAdapter&gt;
       &lt;replicatedLevelDB
@@ -144,37 +108,11 @@ failover:(tcp://broker1:61616,tcp://brok
     ...
   &lt;/broker&gt;
 ]]></script>
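As a hedged illustration only, a replicatedLevelDB element combining the properties documented in the tables below might look like the following (plain XML, all values are placeholders):

    <broker brokerName="broker" ... >
      ...
      <persistenceAdapter>
        <replicatedLevelDB
            directory="activemq-data"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="zk1:2181,zk2:2181,zk3:2181"
            zkPassword="password"
            zkPath="/activemq/leveldb-stores"
            hostname="broker1.example.com"
            sync="quorum_mem"
            />
      </persistenceAdapter>
      ...
    </broker>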
-</div></div>
-
-<h3 id="ReplicatedLevelDBStore-ReplicatedLevelDBStoreProperties">Replicated LevelDB
Store Properties</h3>
-
-<p>All the broker nodes that are part of the same replication set should have matching
<code>brokerName</code> XML attributes. The following configuration properties
should be the same on all the broker nodes that are part of the same replication set:</p>
-
-<div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p> property name </p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p> default value </p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p> Comments </p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>replicas</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>3</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
number of nodes that will exist in the cluster.  At least (replicas/2)+1 nodes must be online
to avoid service outage. </p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p> <code>securityToken</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>&#160;</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> A security token which must match
on all replication nodes for them to accept each other's replication requests. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>zkAddress</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>127.0.0.1:2181</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> A comma
separated list of ZooKeeper servers. </p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p> <code>zkPassword</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>&#160;</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> The password to use when connecting
to the ZooKeeper server. </p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p> <code>zkPath</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>/default</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
path to the ZooKeeper directory where Master/Slave election information will be exchanged.
</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>
<code>zkSessionTmeout</code> </p></td><td
  colspan="1" rowspan="1" class="confluenceTd"><p> <code>2s</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> How quickly a node failure will
be detected by ZooKeeper. </p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p> <code>sync</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>quorum_mem</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> Controls
where updates reside before being considered complete. This setting is a comma separated
list of the following options: <code>local_mem</code>, <code>local_disk</code>,
<code>remote_mem</code>, <code>remote_disk</code>, <code>quorum_mem</code>,
<code>quorum_disk</code>. If you combine two settings for a target, the stronger
guarantee is used.  For example, configuring <code>local_mem, local_disk</code>
is the same as just using <code>local_disk</code>.  quorum_mem is the same as
<code>local_mem, remote_mem</code> and <code>quorum_disk</code> is
the same as <code>local_disk, remote_disk</code> </p></td></tr></tbody></table></div>
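To make that combination rule concrete (attribute values here are just the option names listed above, the rest of the element is elided):

    <!-- These two settings are equivalent: quorum_mem expands to local_mem plus remote_mem. -->
    <replicatedLevelDB ... sync="quorum_mem" ... />
    <replicatedLevelDB ... sync="local_mem, remote_mem" ... />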
-
-
-
-<p>Different replication sets can share the same <code>zkPath</code> as
long as they have different <code>brokerName</code>.</p>
-
-<p>The following configuration properties can be unique per node:</p>
-
-<div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p> property name </p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p> default value </p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p> Comments </p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>bind</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>tcp://0.0.0.0:61619</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> When
this node becomes a master, it will bind the configured address and port to service the replication
protocol.  Using dynamic ports is also supported.  Just configure with <code>tcp://0.0.0.0:0</code>
</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>
<code>hostname</code> </p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>&#160;</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> The host name used to advertise
the replication service when this node becomes the master.  If not set it will be automatically determined.
</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>
<code>weight</code> </p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>
1 </p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
replication node that has the latest update with the highest weight will become the master.
 Used to give preference to some nodes towards becoming master. </p></td></tr></tbody></table></div>
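A brief per-node sketch of the two properties above (values are illustrative): a dynamically bound replication port plus a raised weight to prefer this node during master election.

    <replicatedLevelDB ... bind="tcp://0.0.0.0:0" weight="10" ... />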
-
-
-<p>The store also supports the same configuration properties of a standard <a shape="rect"
href="leveldb-store.html">LevelDB Store</a> but it does not support the pluggable
storage lockers:</p>
-
-<h3 id="ReplicatedLevelDBStore-StandardLevelDBStoreProperties">Standard LevelDB Store
Properties</h3>
-
-<div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p> property name </p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p> default value </p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p> Comments </p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>directory</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>LevelDB</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
directory which the store will use to hold its data files. The store will create the directory
if it does not already exist. </p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p> <code>readThreads</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>10</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> The number of concurrent IO read
threads allowed. </p></td></tr><tr><td colspan="1" rowspan="1"
class="conflu
 enceTd"><p> <code>logSize</code> </p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p> <code>104857600</code> (100 MB)
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
max size (in bytes) of each data log file before log file rotation occurs. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>logWriteBufferSize</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>4194304</code>
(4 MB) </p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>
The maximum amount of log data to build up before writing to the file system. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>verifyChecksums</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>false</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> Set
to true to force checksum verification of all data that is read from the file system. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenc
 eTd"><p> <code>paranoidChecks</code> </p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p> <code>false</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> Make the store error out as soon
as possible if it detects internal corruption. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>indexFactory</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>org.fusesource.leveldbjni.JniDBFactory,
org.iq80.leveldb.impl.Iq80DBFactory</code> </p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p> The factory classes to use when creating the LevelDB indexes
</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>
<code>indexMaxOpenFiles</code> </p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p> <code>1000</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> Number of open files that can be
used by the index. </p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p>
  <code>indexBlockRestartInterval</code> </p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p> <code>16</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> Number keys between restart points
for delta encoding of keys. </p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p> <code>indexWriteBufferSize</code> </p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>6291456</code>
(6 MB) </p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>
Amount of index data to build up in memory before converting to a sorted on-disk file. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>indexBlockSize</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>4096</code>
(4 K) </p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>
The size of index data packed per block. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>indexCacheSize</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>268435456</code> (256 MB)
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
maximum amount of off-heap memory to use to cache index blocks. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>indexCompression</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>snappy</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
type of compression to apply to the index blocks.  Can be snappy or none. </p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p> <code>logCompression</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> <code>none</code>
</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p> The
type of compression to apply to the log records. Can be snappy or none. </p></td></tr></tbody></table></div>
-
-
-
-    <div class="aui-message problem shadowed information-macro">
+</div></div><h3 id="ReplicatedLevelDBStore-ReplicatedLevelDBStoreProperties">Replicated
LevelDB Store Properties</h3><p>All the broker nodes that are part of the same
replication set should have matching <code>brokerName</code> XML attributes. The
following configuration properties should be the same on all the broker nodes that are part
of the same replication set:</p><div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p>property name</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>default value</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>replicas</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>3</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The number of nodes that will exist
in the cluster. At least (replicas/2)+1 nodes must be online to avoid service outage.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>securityToken</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>&#160;</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A security token which must match
on all replication nodes for them to accept each other's replication requests.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>zkAddress</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>127.0.0.1:2181</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A comma separated list of ZooKeeper
servers.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>zkPassword</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>&#160;</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The password to use when connecting
to the ZooKeeper server.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>zkPath</code></p></td><
 td colspan="1" rowspan="1" class="confluenceTd"><p><code>/default</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the ZooKeeper directory
where Master/Slave election information will be exchanged.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>zkSessionTimeout</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>2s</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>How quickly a node failure will be
detected by ZooKeeper. (Prior to 5.11 this property had a typo: <span>zkSessionTmeout</span>.)</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>sync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>quorum_mem</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Controls where updates are reside
before being considered complete. This setting is a comma separated list of the following
options: <code>local_mem</code>, <code>local_disk</code>, 
 <code>remote_mem</code>, <code>remote_disk</code>, <code>quorum_mem</code>,
<code>quorum_disk</code>. If you combine two settings for a target, the stronger
guarantee is used. For example, configuring <code>local_mem, local_disk</code>
is the same as just using <code>local_disk</code>. quorum_mem is the same as <code>local_mem,
remote_mem</code> and <code>quorum_disk</code> is the same as <code>local_disk,
remote_disk</code></p></td></tr></tbody></table></div><p>Different
replication sets can share the same <code>zkPath</code> as long as they have different
<code>brokerName</code>.</p><p>The following configuration properties
can be unique per node:</p><div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p>property name</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>default value</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd
 "><p><code>bind</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>tcp://0.0.0.0:61619</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>When this node becomes a master,
it will bind the configured address and port to service the replication protocol. Using dynamic
ports is also supported. Just configure with <code>tcp://0.0.0.0:0</code></p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>hostname</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>&#160;</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The host name used to advertise the
replication service when this node becomes the master. If not set it will be automatically
determined.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>weight</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>1</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>The replication node that has the latest update
with the highest weight will become the master. Used to give preference to some nodes towards
becoming master.</p></td></tr></tbody></table></div><p>The
store also supports the same configuration properties of a standard <a shape="rect" href="leveldb-store.html">LevelDB
Store</a> but it does not support the pluggable storage lockers:</p><h3 id="ReplicatedLevelDBStore-StandardLevelDBStoreProperties">Standard
LevelDB Store Properties</h3><div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p>property name</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>default value</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>directory</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>LevelDB</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The directory which the store will
use to hold its data files. The store will create the directory if it does not already exist.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>readThreads</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The number of concurrent IO read
threads allowed.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>logSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>104857600</code>
(100 MB)</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>The
max size (in bytes) of each data log file before log file rotation occurs.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>logWriteBufferSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>4194304</code>
(4 MB)</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>The
maximum amount of log data to build up before writing to the file system.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>verifyChecksums</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Set to true to force checksum verification
of all data that is read from the file system.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>paranoidChecks</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Make the store error out as soon
as possible if it detects internal corruption.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>indexFactory</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>org.fusesource.leveldbjni.JniDBFactory,
org.iq80.leveldb.impl.Iq80DBFactory</code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p>The factory classes to use when cre
 ating the LevelDB indexes</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexMaxOpenFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of open files that can be
used by the index.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexBlockRestartInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>16</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number keys between restart points
for delta encoding of keys.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexWriteBufferSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>6291456</code>
(6 MB)</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>Amount
of index data to build up in memory before converting to a sorted on-disk file.</p></td></tr><tr><td
colspan="1" row
 span="1" class="confluenceTd"><p><code>indexBlockSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>4096</code> (4
K)</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>The
size of index data packed per block.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>indexCacheSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>268435456</code>
(256 MB)</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>The
maximum amount of off-heap memory to use to cache index blocks.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>indexCompression</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>snappy</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The type of compression to apply
to the index blocks. Can be snappy or none.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>logCompression</code></p></td><td
colspan="
 1" rowspan="1" class="confluenceTd"><p><code>none</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The type of compression to apply
to the log records. Can be snappy or none.</p></td></tr></tbody></table></div>
   <div class="aui-message problem shadowed information-macro">
                     <p class="title">Caveats</p>
                             <span class="aui-icon icon-problem">Icon</span>
                 <div class="message-content">
-                            
-<p>The LevelDB store does not yet support storing data associated with <a shape="rect"
href="delay-and-schedule-message-delivery.html">Delay and Schedule Message Delivery</a>.
 Those are stored in separate non-replicated KahaDB data files.  Unexpected results
will occur if you use <a shape="rect" href="delay-and-schedule-message-delivery.html">Delay
and Schedule Message Delivery</a> with the replicated leveldb store since that data
will not be there when the master fails over to a slave.</p>
+                            <p>The LevelDB store does not yet support storing data
associated with <a shape="rect" href="delay-and-schedule-message-delivery.html">Delay
and Schedule Message Delivery</a>. Those are stored in separate non-replicated
KahaDB data files. Unexpected results will occur if you use <a shape="rect" href="delay-and-schedule-message-delivery.html">Delay
and Schedule Message Delivery</a> with the replicated leveldb store since that data
will not be there when the master fails over to a slave.</p>
                     </div>
     </div></div>
         </td>


