activemq-commits mailing list archives

From: build...@apache.org
Subject: svn commit: r1002541 - in /websites/production/activemq/content: cache/main.pageCache kahadb.html
Date: Fri, 09 Dec 2016 23:22:45 GMT
Author: buildbot
Date: Fri Dec  9 23:22:45 2016
New Revision: 1002541

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/kahadb.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/kahadb.html
==============================================================================
--- websites/production/activemq/content/kahadb.html (original)
+++ websites/production/activemq/content/kahadb.html Fri Dec  9 23:22:45 2016
@@ -88,11 +88,11 @@
     </persistenceAdapter>
  </broker>
 </pre>
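
As an illustration of how the attributes documented in the property table below are applied, here is a minimal sketch of a persistence adapter combining several of them; the values are arbitrary examples rather than recommendations, and journalDiskSyncStrategy/journalDiskSyncInterval require ActiveMQ 5.14.0 or later:

<broker brokerName="broker">
  <persistenceAdapter>
    <!-- attribute names come from the property table below; the values are illustrative only -->
    <kahaDB directory="${activemq.base}/data/kahadb"
            journalMaxFileLength="32mb"
            checkForCorruptJournalFiles="true"
            journalDiskSyncStrategy="periodic"
            journalDiskSyncInterval="2000"/>
  </persistenceAdapter>
</broker>
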
-</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="confluenceTable"><tbody><tr><th colspan="1"
rowspan="1" class="confluenceTh"><p>property name</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>default value</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>archiveCorruptedIndex</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
corrupted indexes found at startup will be archived (not deleted).</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>archiveDataLogs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
will move a message data log to the archive directory instead of deleting it.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>checkForCorruptJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
will check for corrupt journal files on startup and try and recover them.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>checkpointInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>5000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Time (ms) before check-pointing the
journal.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>checksumJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Create a checksum for a journal file.
The presence of a checksum is required in order for the persistence adapter to be able to detect corrupt journal
files.</p><p>Before <strong>ActiveMQ 5.9.0</strong>: the default is
<strong><code>false</code></strong>.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>cleanupInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>30000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The interval (in ms) between consecutive
checks that determine which journal files, if any, are eligible for removal from the message
store. An eligible journal file is one that has no outstanding references.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>compactAcksAfterNoGC</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
when the acknowledgement compaction feature is enabled this value controls how 
 many store GC cycles must be completed with no other files being cleaned up before the compaction
logic is triggered to possibly compact older acknowledgements spread across journal files
into a new log file.&#160; The lower the value set the faster the compaction may occur
which can impact performance if it runs to often.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>compactAcksIgnoresStoreGrowth</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:&#160;when
the acknowledgement compaction feature is enabled this value controls whether compaction is
run when the store is still growing or if it should only occur when the store has stopped
growing (either due to idle or store limits reached).&#160; If enabled the compaction
runs regardless of the store still having room or being active which can decrease overall
performance but reclaim space faster.&#160;</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>concurrentStoreAndDispatchQueues</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Queue messages
to interested clients to happen concurrently with message storage.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>concurrentStoreAndDispatchTopics</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Topic messages
to interested clients to happen concurrently with message storage</p><div class="confluence-information-macro
confluence-information-macro-warning"><span class="aui-icon aui-icon-small aui-iconfont-error
confluence-information-macro-icon"></span><div class="confluence-information-macro-body">Enabling
this property is not recommended.</div></div></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>directory</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>activemq-data</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the directory to use
to store the message store data and log files.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>directoryArchive</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>null</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Define the directory to move data
logs to when they all the messages they contain have been consumed.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>enableAckCompaction</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
this setting controls whether the store will perform periodic compaction of older journal log files that contain only
Message acknowledgements. By compacting these older acknowledgements into new journal log
files the older files can be removed freeing space and allowing the message store to continue
to operate without hitting store size limits.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>enableIndexWriteAsync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
the index is updated asynchronously.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>enableJournalDiskSyncs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span>Ensure every journal
write is followed by a disk sync (JMS durability requirement).</span></p><div
class="co
 nfluence-information-macro confluence-information-macro-warning"><span class="aui-icon
aui-icon-small aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>This property is deprecated as of <strong>ActiveMQ</strong>
<strong>5.14.0</strong>.</p><p>From <strong>ActiveMQ</strong>
<strong>5.14.0</strong>: see <span style="color: rgb(34,34,34);"><strong><code>journalDiskSyncStrategy</code></strong>.</span></p></div></div></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><code><span>journalDiskSyncStrategy</span></code></td><td
colspan="1" rowspan="1" class="confluenceTd"><code>always</code></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
this setting configures the disk sync policy. The list of available sync strategies are (in
order of decreasing safety, and increasing performance):</p><ul><li><strong><code>always</code></strong>
<span>Ensure every journal write is followed by a disk sync (JMS durability requirement). This is the safest option but is also the
slowest because it requires a sync after every message write. This is equivalent to the deprecated
property&#160;<strong><code>enableJournalDiskSyncs=true</code></strong>.</span></li><li><strong><code>periodic</code></strong>
<span style="color: rgb(34,34,34);">The disk will be synced at set intervals (if a write
has occurred) instead of after every journal write which will reduce the load on the disk
and should improve throughput</span>. The disk will also be synced when rolling over
to a new journal file. The default interval is 1 second. The default interval offers very
good performance, whilst being safer than&#160;<strong><code>never</code></strong>
disk syncing, as data loss is limited to a maximum of 1 second's worth. See <strong><code>journalDiskSyncInterval</code></strong>
to change the frequency of disk syncs.</li><li><strong><code>never</code></strong>
A sync will never be explicitly
  called and it will be up to the operating system to flush to disk. This is equivalent to
setting the deprecated property <strong><code>enableJournalDiskSyncs=false</code></strong>.
This is the fastest option but is the least safe as there's no guarantee as to when data is
flushed to disk. Consequently message loss <em>can</em> occur on broker failure.</li></ul></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><code><span>journalDiskSyncInterval</span></code></td><td
colspan="1" rowspan="1" class="confluenceTd"><code>1000</code></td><td
colspan="1" rowspan="1" class="confluenceTd">Interval (ms) for when to perform a disk sync
when&#160;<strong><code>journalDiskSyncStrategy=periodic</code></strong>.
A sync will only be performed if a write has occurred to the journal since the last disk sync
or when the journal rolls over to a new journal file.</td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>ignoreMissingJournalfiles</code></p></td><td
colspan="1" 
 rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
reports of missing journal files are ignored.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>indexCacheSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of index pages cached in memory.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>indexDirectory</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td colspan="1" rowspan="1"
class="confluenceTd"><p><span>From <strong>ActiveMQ 5.10.0</strong>:
If set, configures where the KahaDB index files (<strong><code>db.data</code></strong>
and&#160;<strong><code>db.redo</code></strong>) will be stored.
If not set, the index files are stored in the directory specified by the&#160;<strong><code>directory</code>
 </strong> attribute.</span></p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>indexWriteBatchSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of indexes written in a batch.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>journalMaxFileLength</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>32mb</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A hint to set the maximum size of
the message data logs.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>maxAsyncJobs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The maximum number of asynchronous
messages that will be queued awaiting storage (should be the same as the number of concurrent
MessageProducers).</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>preallocationScope</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><code>entire_journal</code></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
this setting configures how journal data files are preallocated. The default strategy preallocates
the journal file on first use using the appender thread.&#160;</p><ul><li><strong><code>entire_journal_async</code></strong>
will use preallocate ahead of time in a separate thread.</li><li><strong><code>none</code></strong>
disables preallocation.</li></ul><p>On SSD, using&#160;<strong><code>entire_journal_async</code></strong>
avoids delaying writes pending preallocation on first use.</p><p><strong>Note</strong>:
on HDD the additional thread contention for disk has a negative impact. Therefore use the
default.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>preallocationStrategy</code></p></td><td colspan="1" rowspan="1" class="confluenceTd"><p><code>sparse_file</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.12.0</strong>:&#160;This
setting configures how the broker will try to preallocate the journal files when a new journal
file is needed.</p><ul><li><strong><code>sparse_file</code></strong>
- sets the file length, but does not populate it with any data.</li><li><strong><code>os_kernel_copy</code></strong>
- delegates the preallocation to the Operating System.</li><li><strong><code>zeros</code></strong>&#160;
- each preallocated journal file contains nothing but <strong><code>0x00</code></strong>
throughout.</li></ul></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>storeOpenWireVersion</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>11</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Determines the version of OpenWire
commands that are marshaled to the KahaDB 
 journal.&#160;</p><p>Before <strong>ActiveMQ 5.12.0</strong>:
the default value is <strong><code>6</code></strong>.</p><p>Some
features of the broker depend on information stored in the OpenWire commands from newer protocol
revisions and these may not work correctly if the store version is set to a lower value.&#160;
KahaDB stores from broker versions greater than 5.9.0 will in many cases still be readable
by the broker but will cause the broker to continue using the older store version meaning
newer features may not work as intended.&#160;</p><p>For KahaDB stores that
were created in versions prior to <strong>ActiveMQ 5.9.0</strong> it will be necessary
to manually set <strong><code>storeOpenWireVersion="6"</code></strong>
in order to start a broker without error.</p></td></tr></tbody></table></div><div
class="confluence-information-macro confluence-information-macro-information"><span
class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body">For tuning locking properties see the options
listed at <a shape="rect" href="pluggable-storage-lockers.html">Pluggable storage lockers.</a></div></div><p>&#160;</p><h3
id="KahaDB-SlowFileSystemAccessDiagnosticLogging">Slow File System Access Diagnostic Logging</h3><p>You
can configure a non zero threshold in milliseconds for database updates. If database operation
is slower than that threshold (for example if you set it to <strong><code>500</code></strong>),
you may see messages like:</p><div class="panel" style="border-width: 1px;"><div
class="panelContent">
+</div></div><h3 id="KahaDB-KahaDBProperties">KahaDB Properties</h3><div
class="table-wrap"><table class="confluenceTable"><tbody><tr><th colspan="1"
rowspan="1" class="confluenceTh"><p>Property</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Default</p></th><th colspan="1"
rowspan="1" class="confluenceTh"><p>Comments</p></th></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>archiveCorruptedIndex</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
corrupted indexes found at startup will be archived (not deleted).</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>archiveDataLogs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
will move a message data log to the archive directory instead of deleting it.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>checkForCorruptJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
will check for corrupt journal files on startup and try to recover them.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>checkpointInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>5000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Time (ms) before check-pointing the
journal.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>checksumJournalFiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Create a checksum for a journal file.
The presence of a checksum is required in order for the persistence adapter to be able to detect corrupt journal files.</p><p>Before
<strong>ActiveMQ 5.9.0</strong>: the default is <strong><code>false</code></strong>.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>cleanupInterval</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>30000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The interval (in ms) between consecutive
checks that determine which journal files, if any, are eligible for removal from the message
store. An eligible journal file is one that has no outstanding references.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>compactAcksAfterNoGC</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
when the acknowledgement compaction feature is enabled, this value controls how many store GC cycles must be completed with no other files being cleaned up before the compaction logic is triggered to possibly compact older acknowledgements spread across journal files into a new log file.&#160; The lower the value, the sooner compaction may occur, which can impact performance if it runs too often.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>compactAcksIgnoresStoreGrowth</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:&#160;when
the acknowledgement compaction feature is enabled, this value controls whether compaction is run while the store is still growing, or only once the store has stopped growing (either due to being idle or to store limits being reached).&#160; If enabled, compaction runs regardless of whether the store still has room or is active, which can decrease overall performance but reclaims space faster.&#160;</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>concurrentStoreAndDispatchQueues</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Queue messages
to interested clients to happen concurrently with message storage.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>concurrentStoreAndDispatchTopics</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Enable the dispatching of Topic messages
to interested clients to happen concurrently with message storage.</p><div class="confluence-information-macro
confluence-information-macro-warning"><span class="aui-icon aui-icon-small aui-iconfont-error
confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>Enabling
this property is 
 not recommended.</p></div></div></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>directory</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>activemq-data</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The path to the directory to use
to store the message store data and log files.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>directoryArchive</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>null</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Define the directory to move data
logs to when they all the messages they contain have been consumed.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>enableAckCompaction</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
this setting controls whether the store will perform periodic compaction of older journal log files that contain only Message
acknowledgements. By compacting these older acknowledgements into new journal log files the
older files can be removed freeing space and allowing the message store to continue to operate
without hitting store size limits.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>enableIndexWriteAsync</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
the index is updated asynchronously.</p></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>enableJournalDiskSyncs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>true</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><span>Ensure every journal
write is followed by a disk sync (JMS durability requirement).</span></p><div
class="conflu
 ence-information-macro confluence-information-macro-warning"><span class="aui-icon
aui-icon-small aui-iconfont-error confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>This property is deprecated as of <strong>ActiveMQ</strong>
<strong>5.14.0</strong>.</p><p>From <strong>ActiveMQ</strong>
<strong>5.14.0</strong>: see <span style="color: rgb(34,34,34);"><strong><code>journalDiskSyncStrategy</code></strong>.</span></p></div></div></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>ignoreMissingJournalfiles</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>false</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>If <strong><code>true</code></strong>,
reports of missing journal files are ignored.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>indexCacheSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of index pages cached in memory.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>indexDirectory</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd">&#160;</td><td colspan="1" rowspan="1"
class="confluenceTd"><p><span>From <strong>ActiveMQ 5.10.0</strong>:
If set, configures where the KahaDB index files (<strong><code>db.data</code></strong>
and&#160;<strong><code>db.redo</code></strong>) will be stored.
If not set, the index files are stored in the directory specified by the&#160;<strong><code>directory</code></strong>
attribute.</span></p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>indexWriteBatchSize</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Number of indexes written in a batch.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code><span>journalD
 iskSyncInterval</span></code></p></td><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>1000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>Interval (ms) for when to perform
a disk sync when&#160;<strong><code>journalDiskSyncStrategy=periodic</code></strong>.
A sync will only be performed if a write has occurred to the journal since the last disk sync
or when the journal rolls over to a new journal file.</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code><span>journalDiskSyncStrategy</span></code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>always</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
this setting configures the disk sync policy. The available sync strategies, in order of decreasing safety and increasing performance, are:</p><ul><li><p><strong><code>always</code></strong>
<span>Ensure every journal write is followed by a disk sync (JMS durability requirement). This is the safest option but is also the slowest because it requires
a sync after every message write. This is equivalent to the deprecated property&#160;<strong><code>enableJournalDiskSyncs=true</code></strong>.</span></p></li><li><p><strong><code>periodic</code></strong>
<span style="color: rgb(34,34,34);">The disk will be synced at set intervals (if a write
has occurred) instead of after every journal write which will reduce the load on the disk
and should improve throughput</span>. The disk will also be synced when rolling over
to a new journal file. The default interval is 1 second. The default interval offers very
good performance, whilst being safer than&#160;<strong><code>never</code></strong>
disk syncing, as data loss is limited to a maximum of 1 second's worth. See <strong><code>journalDiskSyncInterval</code></strong>
to change the frequency of disk syncs.</p></li><li><p><strong><code>never</code></strong>
A sync will never be explicitly called
  and it will be up to the operating system to flush to disk. This is equivalent to setting
the deprecated property <strong><code>enableJournalDiskSyncs=false</code></strong>.
This is the fastest option but is the least safe as there's no guarantee as to when data is
flushed to disk. Consequently message loss <em>can</em> occur on broker failure.</p></li></ul></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>journalMaxFileLength</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>32mb</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>A hint to set the maximum size of
the message data logs.</p></td></tr><tr><td colspan="1" rowspan="1"
class="confluenceTd"><p><code>maxAsyncJobs</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>10000</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>The maximum number of asynchronous
messages that will be queued awaiting storage (should be the same as the
  number of concurrent MessageProducers).</p></td></tr><tr><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>preallocationScope</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>entire_journal</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.14.0</strong>:
this setting configures how journal data files are preallocated. The default strategy preallocates
the journal file on first use using the appender thread.&#160;</p><ul><li><p><strong><code>entire_journal_async</code></strong>
preallocates ahead of time in a separate thread.</p></li><li><p><strong><code>none</code></strong>
disables preallocation.</p></li></ul><p>On SSD, using&#160;<strong><code>entire_journal_async</code></strong>
avoids delaying writes pending preallocation on first use.</p><p><strong>Note</strong>:
on HDD the additional thread contention for disk has a negative impact. Therefore use the
default.</p></td></tr><tr><td colspan="1" rowspan="1" class="confluenceTd"><p><code>preallocationStrategy</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>sparse_file</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p>From <strong>ActiveMQ 5.12.0</strong>:&#160;This
setting configures how the broker will try to preallocate the journal files when a new journal
file is needed.</p><ul><li><p><strong><code>sparse_file</code></strong>
- sets the file length, but does not populate it with any data.</p></li><li><p><strong><code>os_kernel_copy</code></strong>
- delegates the preallocation to the Operating System.</p></li><li><p><strong><code>zeros</code></strong>&#160;
- each preallocated journal file contains nothing but <strong><code>0x00</code></strong>
throughout.</p></li></ul></td></tr><tr><td colspan="1"
rowspan="1" class="confluenceTd"><p><code>storeOpenWireVersion</code></p></td><td
colspan="1" rowspan="1" class="confluenceTd"><p><code>11</code></p></td><td
colspan="1" rowspan="1" class="
 confluenceTd"><p>Determines the version of OpenWire commands that are marshaled
to the KahaDB journal.&#160;</p><p>Before <strong>ActiveMQ 5.12.0</strong>:
the default value is <strong><code>6</code></strong>.</p><p>Some
features of the broker depend on information stored in the OpenWire commands from newer protocol
revisions, and these may not work correctly if the store version is set to a lower value.&#160;
KahaDB stores from broker versions greater than 5.9.0 will in many cases still be readable by the broker, but will cause the broker to continue using the older store version, meaning newer features may not work as intended.&#160;</p><p>For KahaDB stores that were created in versions prior to <strong>ActiveMQ 5.9.0</strong>, it will be necessary
to manually set <strong><code>storeOpenWireVersion="6"</code></strong>
in order to start a broker without error.</p></td></tr></tbody></table></div><div
class="confluence-information-macro confluence-information-macro-information"><span
class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>For tuning locking properties see the
options listed at <a shape="rect" href="pluggable-storage-lockers.html">Pluggable storage
lockers.</a></p></div></div><p>&#160;</p><h3 id="KahaDB-SlowFileSystemAccessDiagnosticLogging">Slow
File System Access Diagnostic Logging</h3><p>You can configure a non-zero threshold in milliseconds for database updates. If a database operation is slower than that threshold
(for example if you set it to <strong><code>500</code></strong>),
you may see messages like:</p><div class="panel" style="border-width: 1px;"><div
class="panelContent">
 <p><code>Slow KahaDB access: cleanup took 1277 | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker</code></p>
 </div></div><p>You can configure the threshold used to log these messages with a system property, and adjust it to your disk speed so that you can easily pick up
runtime anomalies.</p><div class="panel" style="border-width: 1px;"><div class="panelContent">
 <p><code>-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500</code></p>
-</div></div><h1 id="KahaDB-Multi(m)kahaDBPersistenceAdapter">Multi(m) kahaDB
Persistence Adapter</h1><p>From <strong>ActiveMQ 5.6</strong>: it's
possible to distribute destinations stores across multiple kahdb persistence adapters. When
would you do this? If you have one fast producer/consumer destination and another periodic
producer destination that has irregular batch consumption then disk usage can grow out of
hand as unconsumed messages become distributed across multiple journal files. Having a separate
journal for each ensures minimal journal usage. Also, some destination may be critical and
require disk synchronization while others may not. In these cases you can use the&#160;<strong><code>mKahaDB</code></strong>
persistence adapter and filter destinations using wildcards, just like with destination policy
entries.</p><h3 id="KahaDB-Transactions">Transactions</h3><p>Transactions
can span multiple journals if the destinations are distributed. This means that two phase
completion is necessary, which does impose a performance (additional disk sync) penalty to record
the commit outcome. This penalty is only imposed if more than one journal is involved in a
transaction.</p><h2 id="KahaDB-Configuration.1">Configuration</h2><p>Each
instance of&#160;<strong><code>kahaDB</code></strong> can be configured
independently. If no destination is supplied to a <strong><code>filteredKahaDB</code></strong>,
the implicit default value will match any destination, queue or topic. This is a handy catch
all. If no matching persistence adapter can be found, destination creation will fail with
an exception. The <strong><code>filteredKahaDB</code></strong> shares
its wildcard matching rules with <a shape="rect" href="per-destination-policies.html">Per
Destination Policies</a>.</p><div class="code panel pdl" style="border-width:
1px;"><div class="codeContent panelContent pdl">
+</div></div><h1 id="KahaDB-Multi(m)kahaDBPersistenceAdapter">Multi(m) kahaDB
Persistence Adapter</h1><p>From <strong>ActiveMQ 5.6</strong>: it's
possible to distribute destination stores across multiple kahaDB persistence adapters. When
would you do this? If you have one fast producer/consumer destination and another periodic
producer destination that has irregular batch consumption, then disk usage can grow out of hand as unconsumed messages become distributed across multiple journal files. Having a separate journal for each ensures minimal journal usage. Also, some destinations may be critical and
require disk synchronization while others may not. In these cases you can use the&#160;<strong><code>mKahaDB</code></strong>
persistence adapter and filter destinations using wildcards, just like with destination policy
entries.</p><h3 id="KahaDB-Transactions">Transactions</h3><p>Transactions
can span multiple journals if the destinations are distributed. This means that two-phase completion is necessary, which does impose a performance (additional disk sync) penalty to record
the commit outcome. This penalty is only imposed if more than one journal is involved in a
transaction.</p><h3 id="KahaDB-Configuration.1">Configuration</h3><p>Each
instance of&#160;<strong><code>kahaDB</code></strong> can be configured
independently. If no destination is supplied to a <strong><code>filteredKahaDB</code></strong>,
the implicit default value will match any destination, queue or topic. This is a handy catch-all. If no matching persistence adapter can be found, destination creation will fail with
an exception. The <strong><code>filteredKahaDB</code></strong> shares
its wildcard matching rules with <a shape="rect" href="per-destination-policies.html">Per
Destination Policies</a>.</p><div class="code panel pdl" style="border-width:
1px;"><div class="codeContent panelContent pdl">
 <pre class="brush: xml; gutter: false; theme: Default" style="font-size:12px;">&lt;broker
brokerName="broker"&gt;
 
 &#160;&lt;persistenceAdapter&gt;
@@ -118,23 +118,24 @@
 &lt;/broker&gt;
 </pre>
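
The Configuration section above notes that destinations can be filtered with wildcards, just like destination policy entries. A minimal sketch of such a layout, shown without the enclosing broker and persistenceAdapter elements; the queue pattern and sync settings are assumptions chosen for illustration, not part of the page above:

<mKahaDB directory="${activemq.base}/data/kahadb">
  <filteredPersistenceAdapters>
    <!-- hypothetical wildcard: queues under "critical." get a dedicated, fully synced store -->
    <filteredKahaDB queue="critical.>">
      <persistenceAdapter>
        <kahaDB journalDiskSyncStrategy="always"/>
      </persistenceAdapter>
    </filteredKahaDB>
    <!-- implicit catch-all: no destination given, so it matches every other queue and topic -->
    <filteredKahaDB>
      <persistenceAdapter>
        <kahaDB journalDiskSyncStrategy="periodic"/>
      </persistenceAdapter>
    </filteredKahaDB>
  </filteredPersistenceAdapters>
</mKahaDB>
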
 </div></div><h3 id="KahaDB-AutomaticPerDestinationPersistenceAdapter">Automatic
Per Destination Persistence Adapter</h3><p>Set <strong><code>perDestination="true"</code></strong>
on the catch-all <strong><code>filteredKahaDB</code></strong> entry, i.e., the one with no explicit destination set. Each matching destination will be assigned its own <strong><code>kahaDB</code></strong>
instance.</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
-<pre class="brush: xml; gutter: false; theme: Default" style="font-size:12px;">&lt;broker
brokerName="broker" ... &gt;
- &lt;persistenceAdapter&gt;
+<pre class="brush: xml; gutter: false; theme: Default" style="font-size:12px;">&lt;broker
brokerName="broker"&gt;
+
+&#160;&lt;persistenceAdapter&gt;
   &lt;mKahaDB directory="${activemq.base}/data/kahadb"&gt;
     &lt;filteredPersistenceAdapters&gt;
       &lt;!-- kahaDB per destinations --&gt;
-      &lt;filteredKahaDB perDestination="true" &gt;
+      &lt;filteredKahaDB perDestination="true"&gt;
         &lt;persistenceAdapter&gt;
-          &lt;kahaDB journalMaxFileLength="32mb" /&gt;
+          &lt;kahaDB journalMaxFileLength="32mb"/&gt;
         &lt;/persistenceAdapter&gt;
       &lt;/filteredKahaDB&gt;
     &lt;/filteredPersistenceAdapters&gt;
   &lt;/mKahaDB&gt;
  &lt;/persistenceAdapter&gt;
-...
+
 &lt;/broker&gt;
 </pre>
-</div></div><div class="confluence-information-macro confluence-information-macro-information"><p
class="title">Note:</p><span class="aui-icon aui-icon-small aui-iconfont-info
confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>Specifying
both <strong><code>perDestination="true"</code></strong> <em>and</em>&#160;<strong><code>queue="&gt;"</code></strong>
on the same&#160;<strong><code>filteredKahaDB</code></strong>
entry has not been tested. It <em> may</em> result in:</p><p>&#160;</p><pre>Reason:
java.io.IOException: File '/opt/java/apache-activemq-5.9.0/data/mKahaDB/lock' could not be
locked as lock is already held for this jvm. </pre></div></div></div>
+</div></div><div class="confluence-information-macro confluence-information-macro-information"><span
class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div
class="confluence-information-macro-body"><p>Specifying both <strong><code>perDestination="true"</code></strong>
<em>and</em>&#160;<strong><code>queue="&gt;"</code></strong>
on the same&#160;<strong><code>filteredKahaDB</code></strong>
entry has not been tested. It <em>may</em> result in the following exception
being raised:</p><p><code>Reason: java.io.IOException: File '/opt/java/apache-activemq-5.9.0/data/mKahaDB/lock'
could not be locked as lock is already held for this jvm</code></p></div></div></div>
         </td>
         <td valign="top">
           <div class="navigation">



