activemq-commits mailing list archives

From build...@apache.org
Subject svn commit: r924029 - in /websites/production/activemq/content: cache/main.pageCache why-do-kahadb-log-files-remain-after-cleanup.html
Date Mon, 29 Sep 2014 12:21:13 GMT
Author: buildbot
Date: Mon Sep 29 12:21:13 2014
New Revision: 924029

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/why-do-kahadb-log-files-remain-after-cleanup.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/why-do-kahadb-log-files-remain-after-cleanup.html
==============================================================================
--- websites/production/activemq/content/why-do-kahadb-log-files-remain-after-cleanup.html (original)
+++ websites/production/activemq/content/why-do-kahadb-log-files-remain-after-cleanup.html Mon Sep 29 12:21:13 2014
@@ -81,37 +81,16 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><p>Cleanup of unreferenced KahaDB journal
log files data-&lt;id&gt;.log will occur every 30seconds by deafault. If a data file
is in-use it will not be cleaned up.<br clear="none">
-The definition of in-use is many fold. In the simplest case, a data file is in-use if it
contains a pending message for a destination or durable topic subscription. If it does not
contain any message, it may contain acks for messages that are in data files that are in-use,
in which case it cannot be removed (b/c a recovery with missing acks would result in redelivery).
<br clear="none">
-If the journal references a pending transaction it cannot be removed till that transaction
completes. Finally, if a data file is the current journal file, it is considered in-use as
there may be a pending write to that journal file.</p>
-
-<p>The trace level logging of the org.apache.activemq.store.kahadb.MessageDatabase
class provides insight into the cleanup process and will allow you to determine why a given
data file is considered in-use and as a result, not a candidate for cleanup.</p>
-
-<p>To debug, add the following (or similar) to your log4j.properties file (if needed):
</p>
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
+<div class="wiki-content maincontent"><p>Clean-up of unreferenced KahaDB journal
log files data-&lt;id&gt;.log will occur every 30 seconds by default. If a data file
is in-use, it will not be cleaned up.</p><p>A data file may be in-use because:</p><ol><li>It
contains a&#160;pending message for a destination or durable topic subscription</li><li>It
contains an ack for a message which is in an in-use data file - the ack cannot be removed
as a recovery would then mark the message for redelivery</li><li>The journal references
a pending transaction</li><li>It is the current journal file, and there may be a pending
write to it</li></ol><p><span style="line-height: 1.4285715;">The
trace level logging of the org.apache.activemq.store.kahadb.MessageDatabase class provides
insight into the cleanup process and will allow you to determine why a given data file is
considered in-use and as a result, not a candidate for cleanup.</span></p><p>To
debug, add the following (or similar) to your log4j.properties file (if needed):</p><div class="code panel pdl" style="border-width: 1px;"><div
class="codeContent panelContent pdl">
 <script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[log4j.appender.kahadb=org.apache.log4j.RollingFileAppender

 log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log 
 log4j.appender.kahadb.maxFileSize=1024KB 
 log4j.appender.kahadb.maxBackupIndex=5 
 log4j.appender.kahadb.append=true 
 log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout 
-log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p 
-%-30.30c{1} - %m%n 
+log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n 
 log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb]]></script>
-</div></div> 
-
-
-<p>Either restart AMQ and let the cleanup process run (give it a minute or two for
example) or alternatively apply this logging configuration to a running broker via JMX. The
"Broker" MBean exposes an operation called "reloadLog4jProperties" in JMX that can be used
to tell the broker to reload its log4j.properties. Often its enough to apply this logging
configuration for 2-5 minutes and then analyze the broker's log file.</p>
-
-
-<p>Examine the log file and look for cleanup of the data files. The process starts
with the complete set of known data files and queries the index on a per destination basis
to prune this list. Anything that remains is a candidate for cleanup.<br clear="none">
-The trace logging gives the destination and the log file numbers that remain candidates for
removal as it iterates through the index. </p>
-
-<p>At some point you'll hit a destination and the number of data file ids will suddenly
drop because that destination references them. It could be a DLQ or an offline durable subscriber.
<br clear="none">
-In any event, the logging will help you pinpoint the destinations that are hogging disk space.</p>
-
-<p>Here is a quick sample</p>
-<div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent
pdl">
+</div></div><p>Either restart AMQ and let the cleanup process run (give
it a minute or two, for example) or alternatively apply this logging configuration to a running
broker via JMX. The "Broker" MBean exposes an operation called "reloadLog4jProperties" in
JMX that can be used to tell the broker to reload its log4j.properties. Often it's enough to
apply this logging configuration for 2-5 minutes and then analyze the broker's log file.</p><p>Examine
the log file and look for cleanup of the data files. The process starts with the complete
set of known data files and queries the index on a per destination basis to prune this list.
Anything that remains is a candidate for cleanup.<br clear="none"> The trace logging
gives the destination and the log file numbers that remain candidates for removal as it iterates
through the index.</p><p>At some point you'll hit a destination and the number
of data file ids will suddenly drop because that destination references them. It could be
a DLQ or an
  offline durable subscriber. <br clear="none"> In any event, the logging will help
you pinpoint the destinations that are hogging disk space.</p><p>Here is a quick
sample</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent
panelContent pdl">
 <script class="theme: Default; brush: java; gutter: false" type="syntaxhighlighter"><![CDATA[
TRACE | Last update: 164:41712, full gc candidates set: [86, 87, 163, 164] | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker
  TRACE | gc candidates after first tx:164:41712, [86, 87, 163] | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker
  TRACE | gc candidates after dest:0:A, [86, 87, 163] | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker
@@ -123,9 +102,7 @@ In any event, the logging will help you 
  TRACE | gc candidates after dest:0:J, [87] | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker
  TRACE | gc candidates: [87] | org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ
Journal Checkpoint Worker
  DEBUG | Cleanup removing the data files: [87] | org.apache.activemq.store.kahadb.MessageDatabase
| ActiveMQ Journal Checkpoint Worker]]></script>
-</div></div>
-<p>We get one candidate, data-87.log from the existing set of journal data files <code>[86,
87, 163, 164]</code>. There is a current transaction using 164, destination (Queue named
E) <code>'0\:E'</code> has some messages in 163, destination <code>'0:I'</code>
has messages in 86 and 87 is unreferenced. In this case, there must be some long standing
unacked messages or a very slow consumer on destination <code>'0:I'</code>.<br
clear="none">
-The <code>'0:'</code> prefix is shorthand for a queue, <code>'1:'</code>
for a topic, i.e: <code>dest:1:B</code> is a topic named B.</p></div>
+</div></div><p>We get one candidate, data-87.log, from the existing set
of journal data files <code>[86, 87, 163, 164]</code>. There is a current transaction
using 164, destination (a queue named E) <code>'0:E'</code> has some messages in
163, destination <code>'0:I'</code> has messages in 86, and 87 is unreferenced.
In this case, there must be some long-standing unacked messages or a very slow consumer on
destination <code>'0:I'</code>.<br clear="none"> The <code>'0:'</code>
prefix is shorthand for a queue, <code>'1:'</code> for a topic, i.e. <code>dest:1:B</code>
is a topic named B.</p></div>
         </td>
         <td valign="top">
           <div class="navigation">

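The reloadLog4jProperties step can also be invoked programmatically rather than from a JMX console. A minimal sketch using the standard JMX remote API, assuming a local broker with the default JMX connector on port 1099 and a brokerName of "localhost" (ActiveMQ 5.8+ object naming; earlier releases use "org.apache.activemq:BrokerName=localhost,Type=Broker"):

// Minimal sketch: ask the Broker MBean to re-read log4j.properties so the TRACE
// logger for MessageDatabase takes effect without restarting the broker.
// The JMX URL, port and brokerName below are assumptions for a default local install.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReloadLog4j {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName broker =
                    new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            mbs.invoke(broker, "reloadLog4jProperties", null, null);
        } finally {
            connector.close();
        }
    }
}

As the page notes, a few minutes of TRACE output is usually enough; the operation can be invoked again after reverting log4j.properties.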

