activemq-users mailing list archives

From Rob Davies <>
Subject Re: OOM with high KahaDB index time
Date Tue, 19 Jan 2010 06:41:30 GMT

On 18 Jan 2010, at 22:14, Daniel Kluesing wrote:

> Hi,
> I'm running the 5.3 release as a standalone broker. In one case, a  
> producer is running without a consumer, producing small, persistent  
> messages, with the FileCursor pendingQueuePolicy (per  
> option) and flow control memoryLimit set to 100mb for the queue in  
> question (through a policy entry).
> As the queue grows above 300k messages, KahaDB indexing starts  
> climbing above 1 second. At around 350k messages, the indexing is  
> taking over 8 seconds. At this point, I start getting java out of  
> heap space errors in essentially random parts of the code. After a  
> while, the producers timeout with a channel inactive for too long  
> error, and the entire broker basically wedges itself. At this point,  
> consumers are generally unable to bind to the broker quitting with  
> timeout errors. When they can connect, consuming a single message  
> triggers an index re-build, which takes 2-8 seconds. Turning on  
> verbose garbage collection, the jvm is collecting like mad but  
> reclaiming no space.
> If I restart the broker, it comes back up, I can consume the old  
> messages, and can handle another 350k messages until it wedges.
> I can reproduce under both default gc and incremental gc.
> Two questions:
> - It seems like someone is holding onto a handle to the messages  
> after they have been persisted to disk - is this a known issue?  
> Should I open a JIRA for it? (Or is there another explanation?)
> - Is there any documentation about the internals of KahaDB - the  
> kind of indices etc? I'd like to get a better understanding of the  
> index performance and in general how KahaDB compares to something  
> like BerkeleyDB.
> Thanks
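
The setup described above would correspond roughly to a policy entry like  
this (a sketch - the queue name is an assumption; fileQueueCursor is the  
element for the FileCursor pendingQueuePolicy):

```xml
<policyEntry queue="some.queue" producerFlowControl="true" memoryLimit="100mb">
  <pendingQueuePolicy>
    <fileQueueCursor/>
  </pendingQueuePolicy>
</policyEntry>
```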

There is some confusion over the naming of our persistence options,  
which doesn't help. Kaha uses multiple log files and a hash-based  
index - it is currently used by the FileCursor - whilst KahaDB is a  
newer implementation, which is more robust and typically uses a  
BTreeIndex. (There is also a new implementation of the FileCursor in  
progress, btw - but that's a different matter.) You can't currently  
configure the HashIndex via the FileCursor, but that looks like the  
problem you are encountering - it looks like you need to increase the  
maximum number of hash buckets.

So I would recommend the following:
1. Use the default pendingQueuePolicy (which only uses a FileCursor  
for non-persistent messages, and uses the underlying database for  
persistent messages)
2. Try KahaDB - which, with the BTreeIndex, will not hit the problems  
you are seeing with the FileCursor
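
Switching to KahaDB is a one-element change in the broker config -  
something like this sketch (the data directory path is an assumption):

```xml
<persistenceAdapter>
  <kahaDB directory="${activemq.base}/data/kahadb"/>
</persistenceAdapter>
```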

or - increase the maximum number of hash buckets for the FileCursor  
index by setting the Java system property maximumCapacity to 65536  
(the default is 16384).
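
Setting that property before launching the broker would look something  
like this (a sketch, assuming the stock startup script picks up  
ACTIVEMQ_OPTS):

```shell
# Raise the FileCursor hash index bucket limit from the 16384 default
export ACTIVEMQ_OPTS="-DmaximumCapacity=65536"
# then start the broker as usual, e.g.: bin/activemq
```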


