activemq-users mailing list archives

From:    Tim Bain <tb...@alumni.duke.edu>
Subject: Re: mKahaDB and durable subscriptions
Date:    Tue, 29 Dec 2015 13:49:01 GMT
I meant the subscription message itself (and I shouldn't have included the
word "offline"), based on the belief that that message would be kept
forever.  Your response in the other thread makes it clear that it isn't
currently, which avoids the problem of data files being held back from
deletion but obviously creates its own problem: index files can't be
recreated if durable subscriptions are in use.

What I asked about here would address both issues: by storing all durable
subscription messages (and only the durable subscription messages) in a
separate KahaDB instance, we could avoid deleting those messages (which
allows index rebuilds) while isolating the messages whose lifecycles differ
so vastly from everything else's (so they don't keep the primary KahaDB
instance's data files alive).
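
For reference, a rough sketch of the per-destination split Christopher
describes below, based on the documented mKahaDB filteredPersistenceAdapters
syntax (the topic filter and directory name here are illustrative, not a
tested configuration):

  <persistenceAdapter>
    <mKahaDB directory="${activemq.data}/mkahadb">
      <filteredPersistenceAdapters>
        <!-- topics with durable subscribers get their own store -->
        <filteredKahaDB topic="durable.>">
          <persistenceAdapter>
            <kahaDB/>
          </persistenceAdapter>
        </filteredKahaDB>
        <!-- catch-all: everything else goes to a second store -->
        <filteredKahaDB>
          <persistenceAdapter>
            <kahaDB/>
          </persistenceAdapter>
        </filteredKahaDB>
      </filteredPersistenceAdapters>
    </mKahaDB>
  </persistenceAdapter>

Note that this matches destinations by name rather than subscriptions, which
is why it only approximates the per-subscription split I'm asking about.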

Tim
On Dec 28, 2015 10:34 AM, "Christopher Shannon" <christopher.l.shannon@gmail.com> wrote:

> Multi-KahaDB already separates destinations (if configured that way) into
> their own KahaDB store, so I'm not really sure what you mean by separate
> durable subscriptions.  If a topic has a bunch of offline durables and
> that topic is configured to have its own KahaDB instance, then it would
> not affect other destinations and their messages, and those other
> destinations would continue to GC properly.
>
> On Fri, Dec 25, 2015 at 10:20 AM, Tim Bain <tbain@alumni.duke.edu> wrote:
>
> > Is it currently possible to configure multi-KahaDB such that all offline
> > durable subscriptions (for all destinations) go into one KahaDB instance
> > while all other messages go into a second instance?
> >
> > Durable subscription messages seem to break the assumption implicit in
> > KahaDB's design decision to never compact data files, which is that as
> > long as consumers are keeping up, all messages in a journal file will
> > quickly become unneeded and the file can always be deleted in lieu of
> > compacting it.  And as has been discussed previously on this mailing
> > list, it's possible for a single old data file to keep alive every data
> > file after it if message-ack pairs span the data files, so a single
> > durable subscription can theoretically prevent KahaDB from ever deleting
> > another data file (which would be a bug, since correct operation of
> > KahaDB is for data files to be deleted when they contain no unconsumed
> > messages).
> >
> > Being able to push them into a separate KahaDB instance would make the
> > assumption valid for the remaining messages in the store, but if that's
> > not currently possible then one of the two features (mKahaDB for durable
> > subscriptions, or compaction) needs to be implemented in 5.14.0.
> >
> > Tim
> >
>
