activemq-issues mailing list archives

From "Gary Tully (JIRA)" <>
Subject [jira] [Commented] (AMQ-6654) Durable subscriber pendingQueueSize not flushed with KahaDB after force-kill of broker
Date Mon, 10 Apr 2017 22:40:41 GMT


Gary Tully commented on AMQ-6654:

This may be resolved via AMQ-6652.

> Durable subscriber pendingQueueSize not flushed with KahaDB after force-kill of broker
> --------------------------------------------------------------------------------------
>                 Key: AMQ-6654
>                 URL:
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: KahaDB
>    Affects Versions: 5.14.4
>         Environment: Reproducible in Linux and Windows
>            Reporter: Justin Reock
>         Attachments: localhost___Durable_Topic_Subscribers.jpg
> This is related to AMQ-5960, which was marked as fixed, but the issue persists.  
> It is very easy to reproduce.
> 1)  Start up ActiveMQ
> 2)  Produce load into a topic
> 3)  Connect a few durable subscribers to the topic
> 4)  Force-terminate the running broker instance 
> 5)  Restart the broker (without load)
> 6)  Allow durable subscribers to reconnect, and attempt to drain the durable subscription
> In almost every case, you will see the durable subscribers' pending queue sizes left
> with lingering "messages."  I put that in quotes because I have been able to prove that
> all the messages are in fact delivered to the clients, so there is no message loss, but
> KahaDB still thinks that there are messages waiting to be dispatched.
> This causes KahaDB to be unable to clean up extents and will ultimately cause the KahaDB
> store to grow out of control, hence the "Major" severity despite no actual message loss occurring.
> The only way to recover from the situation is to delete and recreate the subscriber,
> which does allow KahaDB to clean itself up.
> I have tried several things, including disabling the new ackCompaction functionality,
> significantly reducing the time between checkpoints, and reducing the size of the index
> cache to force more frequent flushes to disk, but none of those completely eliminates the problem.
> This does not happen with LevelDB, but of course LevelDB has been deprecated, so switching
> to it is not a good solution.  It does not happen with JDBC either, but JDBC is, as we
> know, significantly slower than KahaDB, so ideally we'd see this fixed in KahaDB.
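For reference, the workarounds the reporter describes map to attributes on the KahaDB persistence adapter in the broker's activemq.xml. A minimal sketch (the directory path and the specific values shown here are illustrative assumptions, not taken from the report):

```xml
<persistenceAdapter>
    <!-- enableAckCompaction="false" disables the acknowledgement
         compaction feature the reporter tried turning off;
         checkpointInterval (ms) shortens the time between checkpoints;
         indexCacheSize shrinks the index cache to force more frequent
         flushes to disk. -->
    <kahaDB directory="${activemq.data}/kahadb"
            enableAckCompaction="false"
            checkpointInterval="1000"
            indexCacheSize="1000"/>
</persistenceAdapter>
```

Per the report, none of these combinations fully eliminates the stale pendingQueueSize after a force-kill.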

This message was sent by Atlassian JIRA
