activemq-commits mailing list archives

From: build...@apache.org
Subject: svn commit: r898778 - in /websites/production/activemq/content: cache/main.pageCache kahadb.html
Date: Fri, 21 Feb 2014 16:21:39 GMT
Author: buildbot
Date: Fri Feb 21 16:21:39 2014
New Revision: 898778

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/kahadb.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/kahadb.html
==============================================================================
--- websites/production/activemq/content/kahadb.html (original)
+++ websites/production/activemq/content/kahadb.html Fri Feb 21 16:21:39 2014
@@ -81,15 +81,8 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><p>KahaDB is a file based persistence database
that is local to the message broker that is using it. It has been optimised for fast persistence
and is the the default storage mechanism from ActiveMQ 5.4 onwards. KahaDB uses less file
descriptors and provides faster recovery than its predecessor, the <a shape="rect" href="amq-message-store.html">amq
message store</a>.</p>
-
-<h2 id="KahaDB-Configuration">Configuration</h2>
-
-<p>You can configure ActiveMQ to use KahaDB for its persistence adapter, as shown below:</p>
-
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
-<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
- 
+<div class="wiki-content maincontent"><p>KahaDB is a file based persistence database
that is local to the message broker that is using it. It has been optimised for fast persistence
and is the the default storage mechanism from ActiveMQ 5.4 onwards. KahaDB uses less file
descriptors and provides faster recovery than its predecessor, the <a shape="rect" href="amq-message-store.html">AMQ
Message Store</a>.</p><h2 id="KahaDB-Configuration">Configuration</h2><p>You
can configure ActiveMQ to use KahaDB for its persistence adapter - like below:</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[

  &lt;broker brokerName=&quot;broker&quot; ... &gt;
     &lt;persistenceAdapter&gt;
       &lt;kahaDB directory=&quot;activemq-data&quot; journalMaxFileLength=&quot;32mb&quot;/&gt;
@@ -98,42 +91,18 @@
  &lt;/broker&gt;
 
 ]]></script>
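For illustration, the attributes documented in the KahaDB Properties table below can be set directly
on the kahaDB element. The following sketch is hypothetical and simply wires the documented default
values into a configuration; it is not a tuning recommendation:

  <!-- Hypothetical sketch: each attribute is described in the KahaDB Properties table;
       the values shown are the documented defaults. -->
  <broker brokerName="broker" ... >
    <persistenceAdapter>
      <kahaDB directory="activemq-data"
              journalMaxFileLength="32mb"
              indexWriteBatchSize="1000"
              indexCacheSize="10000"
              enableJournalDiskSyncs="true"
              checkpointInterval="5000"
              cleanupInterval="30000"/>
    </persistenceAdapter>
    ...
  </broker>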
-</div></div>
-
-<h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3>
-
-<div class="table-wrap"><table class="confluenceTable"><tbody><tr><th
colspan="1" rowspan="1" class="confluenceTh"><p>property name</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>default value</p></th><th
colspan="1" rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>directory</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>activemq-data</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>the path to the directory to use
to store the message store data and log files</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>indexWriteBatchSize</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>1000</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>number of indexes written in a batch</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>indexCacheSize</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>10000</p></td><td
colspan="1" rowsp
 an="1" class="confluenceTd"><p>number of index pages cached in memory</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>enableIndexWriteAsync</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>if set, will asynchronously write
indexes</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>journalMaxFileLength</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>32mb</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>a hint to set the maximum size of the message
data logs</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>enableJournalDiskSyncs</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>true</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>ensure every non transactional journal write
is followed by a disk sync (JMS durability requirement)</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>cleanupInt
 erval</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>30000</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>time (ms) before checking for a discarding/moving
message data logs that are no longer used</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>checkpointInterval</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>5000</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>time (ms) before checkpointing the journal</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>ignoreMissingJournalfiles</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will ignore a missing
message log file</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>checkForCorruptJournalFiles</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will check
  for corrupted Journal files on startup and try and recover them</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>checksumJournalFiles</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span style="text-decoration:
line-through;">false</span> true <sub>v5.9</sub></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>create a checksum for a journal file
- to enable checking for corrupted journals</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>archiveDataLogs</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will move a message data
log to the archive directory instead of deleting it.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>directoryArchive</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>null</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Define the directory to move data logs to when
the
 y all the messages they contain have been consumed.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>maxAsyncJobs</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>10000</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>the maximum number of asynchronous
messages that will be queued awaiting storage (should be the same as the number of concurrent
MessageProducers)</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p>concurrentStoreAndDispatchTopics</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>enable the dispatching of Topic messages
to interested clients to happen concurrently with message storage</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>concurrentStoreAndDispatchQueues</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>true</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>enable the dispatching of Qu
 eue messages to interested clients to happen concurrently with message storage</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>archiveCorruptedIndex</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, corrupted indexes found
at startup will be archived (not deleted)</p></td></tr></tbody></table></div>
-
-
-<p>For tuning locking properties please take a look at <a shape="rect" href="pluggable-storage-lockers.html">Pluggable
storage lockers</a></p>
-
-<h3 id="KahaDB-Slowfilesystemaccessdiagnosticlogging">Slow file system access diagnostic
logging </h3>
-
-<p>You can configure a non-zero threshold in milliseconds for database updates.<br
clear="none">
-If a database operation is slower than that threshold (for example if you set it to 500), you
may see messages like</p>
-
-<div class="panel" style="border-width: 1px;"><div class="panelContent">
+</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="confluenceTable"><tbody><tr><th colspan="1"
rowspan="1" class="confluenceTh"><p>property name</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>default value</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>directory</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>activemq-data</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>the path to the directory to use
to store the message store data and log files</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd">IndexDirectory</td><td colspan="1"
rowspan="1" class="confluenceTd">&#160;</td><td colspan="1" rowspan="1" class="confluenceTd"><p><span>If
set, configures where the KahaDB index files will be stored. If not set, the index files are
stored in the directory specified by the 'directory' attribute. 
 </span></p>    <div class="aui-message warning shadowed information-macro">
+                            <span class="aui-icon icon-warning">Icon</span>
+                <div class="message-content">
+                            Available as of ActiveMQ 5.10
+                    </div>
+    </div>
+</td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>indexWriteBatchSize</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>1000</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>number of indexes written in a batch</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>indexCacheSize</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>10000</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>number of index pages cached in memory</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>enableIndexWriteAsync</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>if set, will asynchronously write
indexes</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>journalMaxFileLength</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>32mb</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>a hint to set the maximum 
 size of the message data logs</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p>enableJournalDiskSyncs</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>true</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>ensure every non transactional journal write
is followed by a disk sync (JMS durability requirement)</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>cleanupInterval</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>30000</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>time (ms) before checking for a discarding/moving
message data logs that are no longer used</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>checkpointInterval</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>5000</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>time (ms) before checkpointing the journal</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>ignoreMissingJournalf
 iles</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will ignore a missing
message log file</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>checkForCorruptJournalFiles</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, will check for corrupted
Journal files on startup and try and recover them</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>checksumJournalFiles</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span style="text-decoration:
line-through;">false</span> true <sub>v5.9</sub></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>create a checksum for a journal file
- to enable checking for corrupted journals</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>archiveDataLogs</p></td><td
colspan="1" rowspan="1" class="conflue
 nceTd"><p>false</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>If
enabled, will move a message data log to the archive directory instead of deleting it.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>directoryArchive</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>null</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>Define the directory to move data logs to when
they all the messages they contain have been consumed.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>maxAsyncJobs</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>10000</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>the maximum number of asynchronous
messages that will be queued awaiting storage (should be the same as the number of concurrent
MessageProducers)</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p>concurrentStoreAndDispatchTopics</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>f
 alse</p></td><td colspan="1" rowspan="1" class="confluenceTd"><p>enable
the dispatching of Topic messages to interested clients to happen concurrently with message
storage</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p>concurrentStoreAndDispatchQueues</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>true</p></td><td colspan="1"
rowspan="1" class="confluenceTd"><p>enable the dispatching of Queue messages to interested
clients to happen concurrently with message storage</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p>archiveCorruptedIndex</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>false</p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If enabled, corrupted indexes found
at startup will be archived (not deleted)</p></td></tr></tbody></table></div><p>For
tuning locking properties please take a look at <a shape="rect" href="pluggable-storage-lockers.html">Pluggable
storage lockers</a></p><h3 id="KahaDB-Slowfilesystemaccessdiagnosticlogging">Slow file system access diagnostic logging</h3><p>You
can configure a non-zero threshold in milliseconds for database updates.<br clear="none">
If a database operation is slower than that threshold (for example if you set it to 500), you
may see messages like</p><div class="panel" style="border-width: 1px;"><div
class="panelContent">
 <p>Slow KahaDB access: cleanup took 1277 | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker</p>
-</div></div>
-
-<p>You can configure the threshold used to log these messages with a system property;
adjust it to your disk speed so that you can easily pick up runtime anomalies.</p>
-
-<div class="panel" style="border-width: 1px;"><div class="panelContent">
+</div></div><p>You can configure the threshold used to log these messages
with a system property; adjust it to your disk speed so that you can easily pick up
runtime anomalies.</p><div class="panel" style="border-width: 1px;"><div class="panelContent">
 <p>-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500</p>
-</div></div>
-
-<h1 id="KahaDB-Multi(m)kahaDBpersistenceadapter">Multi(m) kahaDB persistence adapter</h1>
-<p>From 5.6, it is possible to distribute destination stores across multiple kahaDB
persistence adapters. When would you do this? If you have one fast producer/consumer destination
and another periodic producer destination that has irregular batch consumption, your disk usage
can grow out of hand because unconsumed messages get dotted across journal files. Having a
separate journal for each ensures minimal journal usage. Also, some destinations may be critical
and require disk synchronisation while others may not.<br clear="none">
-In these cases you can use the mKahaDB persistence adapter and filter destinations using
wildcards, just like with destination policy entries.</p>
-
-<h3 id="KahaDB-Transactions">Transactions</h3>
-<p>Transactions can span multiple journals if the destinations are distributed. This
means that two-phase completion is necessary, which does impose a performance (additional
disk sync) penalty to record the commit outcome. This penalty is only imposed if more than
one journal is involved in a transaction.</p>
-
-<h2 id="KahaDB-Configuration.1">Configuration</h2>
-<p>Each instance of kahaDB can be configured independently. If no destination is supplied
to a <code>filteredKahaDB</code>, the implicit default value will match any destination,
queue or topic. This is a handy catch all. If no matching persistence adapter can be found,
destination creation will fail with an exception. The <code>filteredKahaDB</code>
shares its wildcard matching rules with <a shape="rect" href="per-destination-policies.html">per
destination policies</a>.</p>
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
-<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
-&lt;broker brokerName=&quot;broker&quot; ... &gt;
+</div></div><h1 id="KahaDB-Multi(m)kahaDBpersistenceadapter">Multi(m) kahaDB
persistence adapter</h1><p>From 5.6, it is possible to distribute destination
stores across multiple kahaDB persistence adapters. When would you do this? If you have one
fast producer/consumer destination and another periodic producer destination that has irregular
batch consumption, your disk usage can grow out of hand because unconsumed messages get dotted
across journal files. Having a separate journal for each ensures minimal journal usage. Also,
some destinations may be critical and require disk synchronisation while others may not.<br
clear="none"> In these cases you can use the mKahaDB persistence adapter and filter destinations
using wildcards, just like with destination policy entries.</p><h3 id="KahaDB-Transactions">Transactions</h3><p>Transactions
can span multiple journals if the destinations are distributed. This means that two-phase
completion is necessary, which does impose a performance (additional
disk sync) penalty to record the commit outcome. This penalty is only imposed if more
than one journal is involved in a transaction.</p><h2 id="KahaDB-Configuration.1">Configuration</h2><p>Each
instance of kahaDB can be configured independently. If no destination is supplied to a <code>filteredKahaDB</code>,
the implicit default value will match any destination, queue or topic. This is a handy
catch-all. If no matching persistence adapter can be found, destination creation will fail with
an exception. The <code>filteredKahaDB</code> shares its wildcard matching rules
with <a shape="rect" href="per-destination-policies.html">Per Destination Policies</a>.</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[&lt;broker
brokerName=&quot;broker&quot; ... &gt;
  &lt;persistenceAdapter&gt;
   &lt;mKahaDB directory=&quot;${activemq.base}/data/kahadb&quot;&gt;
     &lt;filteredPersistenceAdapters&gt;
@@ -156,13 +125,8 @@ In these cases you can use the mKahaDB p
 ...
 &lt;/broker&gt;
 ]]></script>
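The wildcard filtering described above pairs each filteredKahaDB entry with its own kahaDB
instance. As a hedged sketch (the queue wildcard and attribute values here are hypothetical
examples, and the filteredKahaDB queue attribute is assumed to follow the usual destination
wildcard syntax), a layout with one dedicated store plus a catch-all might look like:

  <!-- Hypothetical sketch: the queue wildcard is an example; the entry with no
       queue or topic attribute acts as the catch-all described above. -->
  <broker brokerName="broker" ... >
    <persistenceAdapter>
      <mKahaDB directory="${activemq.base}/data/kahadb">
        <filteredPersistenceAdapters>
          <!-- Destinations matching the wildcard get their own journal. -->
          <filteredKahaDB queue="org.apache.>">
            <persistenceAdapter>
              <kahaDB journalMaxFileLength="32mb"/>
            </persistenceAdapter>
          </filteredKahaDB>
          <!-- Catch-all: matches any other destination, queue or topic. -->
          <filteredKahaDB>
            <persistenceAdapter>
              <kahaDB/>
            </persistenceAdapter>
          </filteredKahaDB>
        </filteredPersistenceAdapters>
      </mKahaDB>
    </persistenceAdapter>
    ...
  </broker>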
-</div></div>
-
-<h3 id="KahaDB-Automaticperdestinationpersistenceadapter">Automatic per destination
persistence adapter</h3>
-<p>When the <code>perDestination</code> boolean attribute is set to true
on the catch-all <code>filteredKahaDB</code> (no explicit destination set), each
matching destination will get its own <code>kahaDB</code> instance.</p>
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
-<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
-&lt;broker brokerName=&quot;broker&quot; ... &gt;
+</div></div><h3 id="KahaDB-Automaticperdestinationpersistenceadapter">Automatic
per destination persistence adapter</h3><p>When the <code>perDestination</code>
boolean attribute is set to true on the catch-all <code>filteredKahaDB</code> (no explicit destination set),
each matching destination will get its own <code>kahaDB</code> instance.</p><div
class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
+<script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[&lt;broker
brokerName=&quot;broker&quot; ... &gt;
  &lt;persistenceAdapter&gt;
   &lt;mKahaDB directory=&quot;${activemq.base}/data/kahadb&quot;&gt;
     &lt;filteredPersistenceAdapters&gt;
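For the automatic per destination variant described above, a hedged sketch of a complete
configuration (assuming, as the text states, that the perDestination attribute sits on the
catch-all filteredKahaDB entry) could look like:

  <!-- Hypothetical sketch: perDestination="true" on the catch-all entry gives every
       matching destination its own kahaDB instance under the mKahaDB directory. -->
  <broker brokerName="broker" ... >
    <persistenceAdapter>
      <mKahaDB directory="${activemq.base}/data/kahadb">
        <filteredPersistenceAdapters>
          <filteredKahaDB perDestination="true">
            <persistenceAdapter>
              <kahaDB/>
            </persistenceAdapter>
          </filteredKahaDB>
        </filteredPersistenceAdapters>
      </mKahaDB>
    </persistenceAdapter>
    ...
  </broker>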


