activemq-dev mailing list archives

From "Sergiy Barlabanov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMQ-5542) KahaDB data files containing acknowledgements are deleted during cleanup
Date Wed, 28 Jan 2015 00:18:35 GMT

    [ https://issues.apache.org/jira/browse/AMQ-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294463#comment-14294463 ]

Sergiy Barlabanov commented on AMQ-5542:
----------------------------------------

I think you are right. Otherwise KahaDB would lose acks and replay messages.
If this is true, then the current cleanup mechanism has to be reconsidered. It will not work
well for scenarios where messages may stay unconsumed for some time. Some sort of compaction
has to be done.
In our current project we nearly always have some messages sitting in DLQs for up to 2 days.
This means we would always keep nearly all data files of the last two days. Currently that is
ok, since we have enough space on the SAN. But what if it were 2 weeks instead of 2 days?
I think in that case we would use the JDBC store.
Another possibility would be to use mKahaDB and put the DLQs into a separate store. That store
would not grow fast, since there would not be much traffic on it.
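
A rough sketch of such a split, using the programmatic broker configuration (the same thing is normally expressed with the <mKahaDB> element in activemq.xml). The "DLQ.>" queue prefix, the directory and the exact wiring are illustrative assumptions, not our actual setup:

    import java.io.File;
    import java.util.Arrays;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
    import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
    import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

    public class DlqSplitBrokerSketch {
        public static void main(String[] args) throws Exception {
            // Dedicated store for the DLQs: low traffic, so its journal grows
            // slowly and its long retention does not pin files of the busy store.
            FilteredKahaDBPersistenceAdapter dlqFilter = new FilteredKahaDBPersistenceAdapter();
            dlqFilter.setQueue("DLQ.>"); // assumed DLQ naming convention
            dlqFilter.setPersistenceAdapter(new KahaDBPersistenceAdapter());

            // Catch-all store for everything else (no destination filter set,
            // like the attribute-less <filteredKahaDB/> element in the XML form).
            FilteredKahaDBPersistenceAdapter defaultFilter = new FilteredKahaDBPersistenceAdapter();
            defaultFilter.setPersistenceAdapter(new KahaDBPersistenceAdapter());

            MultiKahaDBPersistenceAdapter mKahaDB = new MultiKahaDBPersistenceAdapter();
            mKahaDB.setDirectory(new File("data/mkahadb")); // illustrative path
            mKahaDB.setFilteredPersistenceAdapters(Arrays.asList(dlqFilter, defaultFilter));

            BrokerService broker = new BrokerService();
            broker.setPersistenceAdapter(mKahaDB);
            broker.start();
            broker.waitUntilStopped();
        }
    }

With a split like this, DLQ data files can sit around for days without keeping the main store from cleaning up its own journal.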

> KahaDB data files containing acknowledgements are deleted during cleanup
> ------------------------------------------------------------------------
>
>                 Key: AMQ-5542
>                 URL: https://issues.apache.org/jira/browse/AMQ-5542
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Message Store
>    Affects Versions: 5.10.0, 5.10.1
>            Reporter: Sergiy Barlabanov
>         Attachments: AMQ-5542.patch, AdjustedAMQ2832Test.patch
>
>
> AMQ-2832 was not fixed cleanly.
> The commit dd68c61e65f24b7dc498b36e34960a4bc46ded4b by Gary from 8.10.2010 introduced a problem by deleting too many files.
> A scenario we are currently facing in production:
> Data file #1 contains unconsumed messages sitting in a DLQ, so this file is not a cleanup candidate.
> The next file, #2, contains acks for some messages from file #1. This file is not a cleanup candidate either (because of the ackMessageFileMap logic).
> The next file, #3, contains acks for some messages from file #2. And this file is deleted during the cleanup procedure. So on broker restart, all messages from #2 whose acks were in the deleted file #3 are replayed!
> The reason is the gcCandidates variable, which is a copy of gcCandidateSet (see the end of MessageDatabase#checkpointUpdate - org/apache/activemq/store/kahadb/MessageDatabase.java:1659 on the 5.10.0 tag). When a candidate is removed from gcCandidateSet (org/apache/activemq/store/kahadb/MessageDatabase.java:1668 on the 5.10.0 tag), gcCandidates still contains that candidate, so the comparison at org/apache/activemq/store/kahadb/MessageDatabase.java:1666 gives the wrong result.
> I will try to adjust AMQ2832Test.
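
For illustration, a self-contained toy model of that cleanup decision as I read it. This is a sketch, not the literal 5.10.0 source: only the names gcCandidateSet, gcCandidates and ackMessageFileMap are taken from MessageDatabase.java; the class, the map contents and the file ids are made up to reproduce the #1/#2/#3 scenario above.

    import java.util.*;

    public class AckFileCleanupBugSketch {
        public static void main(String[] args) {
            // data file id -> ids of files whose messages are acked by acks stored in it
            Map<Integer, Set<Integer>> ackMessageFileMap = new HashMap<Integer, Set<Integer>>();
            ackMessageFileMap.put(2, new HashSet<Integer>(Arrays.asList(1))); // #2 acks messages in #1
            ackMessageFileMap.put(3, new HashSet<Integer>(Arrays.asList(2))); // #3 acks messages in #2

            // #1 holds unconsumed DLQ messages, so it never becomes a candidate;
            // #2 and #3 contain no live messages and start out as candidates.
            TreeSet<Integer> gcCandidateSet = new TreeSet<Integer>(Arrays.asList(2, 3));

            // the copy the 5.10.0 comparison is made against
            TreeSet<Integer> gcCandidates = new TreeSet<Integer>(gcCandidateSet);

            Iterator<Integer> candidates = gcCandidateSet.iterator();
            while (candidates.hasNext()) {
                Integer candidate = candidates.next();
                Set<Integer> referencedFileIds = ackMessageFileMap.get(candidate);
                if (referencedFileIds != null) {
                    for (Integer referencedFileId : referencedFileIds) {
                        // BUG: membership is tested against the stale copy. #2 is
                        // removed from gcCandidateSet (it acks the retained #1),
                        // but it is still in gcCandidates, so #3, which holds the
                        // acks for #2's messages, keeps looking deletable.
                        if (!gcCandidates.contains(referencedFileId)) {
                            candidates.remove(); // keep this file, it acks a retained file
                            break;
                        }
                    }
                }
            }

            // prints "files to delete: [3]"
            System.out.println("files to delete: " + gcCandidateSet);
        }
    }

Checking membership against the live gcCandidateSet instead of the copy keeps #3 in this scenario.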



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
