activemq-commits mailing list archives

From build...@apache.org
Subject svn commit: r1003180 - in /websites/production/activemq/content: cache/main.pageCache kahadb.html
Date Tue, 20 Dec 2016 15:22:46 GMT
Author: buildbot
Date: Tue Dec 20 15:22:46 2016
New Revision: 1003180

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/kahadb.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/kahadb.html
==============================================================================
--- websites/production/activemq/content/kahadb.html (original)
+++ websites/production/activemq/content/kahadb.html Tue Dec 20 15:22:46 2016
@@ -92,7 +92,7 @@
 <p><code>Slow KahaDB access: cleanup took 1277 | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker</code></p>
 </div></div><p>You can configure a threshold used to log these messages
by using a system property and adjust it to your disk speed so that you can easily pick up
runtime anomalies.</p><div class="panel" style="border-width: 1px;"><div class="panelContent">
 <p><code>-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=1500</code></p>
-</div></div><h1 id="KahaDB-Multi(m)kahaDBPersistenceAdapter">Multi(m) kahaDB
Persistence Adapter</h1><p>From <strong>ActiveMQ 5.6</strong>: it's
possible to distribute destination stores across multiple kahaDB persistence adapters. When
would you do this? If you have one fast producer/consumer destination and another periodic
producer destination that has irregular batch consumption then disk usage can grow out of
hand as unconsumed messages become distributed across multiple journal files. Having a separate
journal for each ensures minimal journal usage. Also, some destinations may be critical and
require disk synchronization while others may not. In these cases you can use the&#160;<strong><code>mKahaDB</code></strong>
persistence adapter and filter destinations using wildcards, just like with destination policy
entries.</p><h3 id="KahaDB-Transactions">Transactions</h3><p>Transactions
can span multiple journals if the destinations are distributed. This means that two phase
completion is necessary, which does impose a performance (additional disk sync) penalty to record
the commit outcome. This penalty is only imposed if more than one journal is involved in a
transaction.</p><h3 id="KahaDB-Configuration.1">Configuration</h3><p>Each
instance of&#160;<strong><code>kahaDB</code></strong> can be configured
independently. If no destination is supplied to a <strong><code>filteredKahaDB</code></strong>,
the implicit default value will match any destination, queue or topic. This is a handy catch-all.
If no matching persistence adapter can be found, destination creation will fail with
an exception. The <strong><code>filteredKahaDB</code></strong> shares
its wildcard matching rules with <a shape="rect" href="per-destination-policies.html">Per
Destination Policies</a>.</p><div class="code panel pdl" style="border-width:
1px;"><div class="codeContent panelContent pdl">
+</div></div><h1 id="KahaDB-Multi(m)kahaDBPersistenceAdapter">Multi(m) kahaDB
Persistence Adapter</h1><p>From <strong>ActiveMQ 5.6</strong>: it's
possible to distribute destination stores across multiple kahaDB persistence adapters. When
would you do this? If you have one fast producer/consumer destination and another periodic
producer destination that has irregular batch consumption then disk usage can grow out of
hand as unconsumed messages become distributed across multiple journal files. Having a separate
journal for each ensures minimal journal usage. Also, some destinations may be critical and
require disk synchronization while others may not. In these cases you can use the&#160;<strong><code>mKahaDB</code></strong>
persistence adapter and filter destinations using wildcards, just like with destination policy
entries.</p><h3 id="KahaDB-Transactions">Transactions</h3><p>Transactions
can span multiple journals if the destinations are distributed. This means that two phase
completion is necessary, which does impose a performance (additional disk sync) penalty to record
the commit outcome. This penalty is only imposed if more than one journal is involved in a
transaction.</p><h3 id="KahaDB-Configuration.1">Configuration</h3><p>Each
instance of&#160;<strong><code>kahaDB</code></strong> can be configured
independently. If no destination is supplied to a <strong><code>filteredKahaDB</code></strong>,
the implicit default value will match any destination, queue or topic. This is a handy catch-all.
If no matching persistence adapter can be found, destination creation will fail with
an exception. The <strong><code>filteredKahaDB</code></strong> shares
its wildcard matching rules with <a shape="rect" href="per-destination-policies.html">Per
Destination Policies</a>.</p><p>From ActiveMQ 5.15, <strong><code>filteredKahaDB</code></strong>&#160;supports
a StoreUsage&#160;attribute named <strong><code>usage</code></strong>.
This allows individual disk limits to be imposed on matching queues.</p><div class="code panel pdl" style="border-width: 1px;"><div
class="codeContent panelContent pdl">
 <pre class="brush: xml; gutter: false; theme: Default" style="font-size:12px;">&lt;broker
brokerName="broker"&gt;
 
 &#160;&lt;persistenceAdapter&gt;
@@ -100,6 +100,9 @@
     &lt;filteredPersistenceAdapters&gt;
       &lt;!-- match all queues --&gt;
       &lt;filteredKahaDB queue="&gt;"&gt;
+        &lt;usage&gt;
+         &lt;storeUsage limit="1g" /&gt;
+        &lt;/usage&gt;
         &lt;persistenceAdapter&gt;
           &lt;kahaDB journalMaxFileLength="32mb"/&gt;
         &lt;/persistenceAdapter&gt;
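For context, the hunk above shows only the middle of the broker configuration. A complete mKahaDB configuration combining the elements discussed on the page might look like the following sketch; the `directory` path, journal sizes, and the second (default) filtered adapter are illustrative assumptions, not part of this commit:

```xml
<broker brokerName="broker">
  <persistenceAdapter>
    <mKahaDB directory="${activemq.data}/kahadb">
      <filteredPersistenceAdapters>
        <!-- match all queues; per-store disk limit requires ActiveMQ 5.15+ -->
        <filteredKahaDB queue=">">
          <usage>
            <storeUsage limit="1g" />
          </usage>
          <persistenceAdapter>
            <kahaDB journalMaxFileLength="32mb"/>
          </persistenceAdapter>
        </filteredKahaDB>
        <!-- no destination attribute: implicit catch-all for everything else -->
        <filteredKahaDB>
          <persistenceAdapter>
            <kahaDB/>
          </persistenceAdapter>
        </filteredKahaDB>
      </filteredPersistenceAdapters>
    </mKahaDB>
  </persistenceAdapter>
</broker>
```

Each nested `kahaDB` element can carry its own tuning attributes independently, which is the point of the multi-adapter setup described in the page text.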


