activemq-dev mailing list archives

From Christopher Shannon <>
Subject Re: Random Access Queues, possible?
Date Wed, 09 Jan 2019 17:20:50 GMT
I meant to add to my previous response that if a consumer with a selector
is stuck because no paged-in messages match that selector, then the only way
for that consumer to get more messages is for other consumers to come online
and process the messages that are currently paged in. At that point more
messages can be paged in from the store, but there is no way to scan ahead
for other matching messages until the paged-in ones are processed first.
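The starvation scenario above can be sketched with a small, self-contained simulation. Everything here is illustrative, not ActiveMQ's actual API: `PAGE_SIZE` stands in for the broker's page-in limit, and `matchesSelector` stands in for a JMS selector.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of the paged-in window and selector starvation.
// PAGE_SIZE and matchesSelector are illustrative, not ActiveMQ APIs.
public class SelectorStarvationDemo {
    static final int PAGE_SIZE = 3; // broker only pages a limited window into memory

    // Stand-in for a JMS selector that only matches messages tagged "B"
    static boolean matchesSelector(String msg) {
        return msg.startsWith("B");
    }

    // Dispatch scans only the paged-in window, in order; no random access.
    static String dispatch(Iterable<String> pagedIn) {
        for (String m : pagedIn) {
            if (matchesSelector(m)) return m;
        }
        return null; // nothing in the window matches: the selector consumer is stuck
    }

    public static void main(String[] args) {
        // Messages sit in the store in arrival order; "B1" is behind three "A"s.
        Deque<String> store = new ArrayDeque<>(List.of("A1", "A2", "A3", "B1"));

        // Page in the first PAGE_SIZE messages.
        Deque<String> pagedIn = new ArrayDeque<>();
        while (pagedIn.size() < PAGE_SIZE && !store.isEmpty()) {
            pagedIn.add(store.poll());
        }
        System.out.println(dispatch(pagedIn)); // null: window is all "A" messages

        // Another consumer (without a selector) drains the window...
        pagedIn.clear();
        // ...which lets the broker page in more messages from the store.
        while (pagedIn.size() < PAGE_SIZE && !store.isEmpty()) {
            pagedIn.add(store.poll());
        }
        System.out.println(dispatch(pagedIn)); // "B1" finally reaches the consumer
    }
}
```

The point is only the ordering constraint: until the non-matching paged-in messages are consumed by someone, the broker never scans further into the store on the stuck consumer's behalf.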

On Wed, Jan 9, 2019 at 12:18 PM Christopher Shannon <> wrote:

> Messages are processed in order; selectors work by skipping over messages
> that a consumer can't consume.  In fact, this is a major issue with
> selectors and a single consumer.  If a consumer is online with a selector
> and none of the messages paged into memory match that selector, then that
> consumer gets stuck, precisely because there is no random access and
> messages have to be processed in order.
> On Wed, Jan 9, 2019 at 6:17 AM Andreas Mueller <> wrote:
>> > On 8. Jan 2019, at 19:32, Arthur Naseef <> wrote:
>> >
>> > With all of that said, I am curious to know what motivations exist to
>> drive
>> > this request.
>> Well, this is the engine:
>> More details here:
>> The big advantage is that it turns a broker into a streaming analytics
>> engine. It is just part of the broker, no need to install anything. We have
>> some (not yet released) tools on top of it like dynamic dashboards, flow
>> programming and orchestration etc.
>> Being part of a broker makes Streams unique. It makes a broker
>> scriptable. Application logic can run within the broker, brokers can be
>> provisioned with a set of Streams to fulfill dedicated tasks. With
>> orchestration this can be done dynamically. Start a naked broker, push the
>> Streams, done.
>> All these advantages go away when I don't use broker resources but, e.g.,
>> mapDB, and communicate over standard protocols. That requires additional
>> installs, multiple databases (the broker's persistent store plus mapDB
>> files), and gives no HA consistency. Streams would then be in direct
>> competition with Apache Flink. That's what I want to avoid, because I see
>> Streams as kind of bread-and-butter analytics that can be used to analyze
>> existing message flows on the fly. If one needs more, install and use
>> Flink + Kafka.
>> Therefore, it only makes sense to me if I can wire the Stream Interface
>> with Artemis internals.
>> Cheers
>> Andreas
>> --
>> Andreas Mueller
>> IIT Software GmbH
>> Falkenhorst 11, 48155 Münster, Germany
>> Phone: +49 (0)251 39 72 99 00
>> Managing Director: Andreas Müller
>> District Court: Amtsgericht Münster, HRB 16294
>> VAT-No: DE199945912
>> This e-mail may contain confidential and/or privileged information. If
>> you are not the intended recipient (or have received this e-mail in error)
>> please notify the sender immediately and destroy this e-mail. Any
>> unauthorized copying, disclosure or distribution of the material in this
>> e-mail is strictly forbidden.
