activemq-dev mailing list archives

From Christopher Shannon <christopher.l.shan...@gmail.com>
Subject Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0
Date Wed, 07 Dec 2016 21:11:37 GMT
+1 for merging the branch into master after the cleanup is done and bumping
to 2.0 since it is a major architecture change.


On Wed, Dec 7, 2016 at 3:31 PM, Justin Bertram <jbertram@apache.com> wrote:

> Essential feature parity with 5.x (where it makes sense) has been a goal
> all along, but I think waiting until such parity exists before the next
> major release means the community will be waiting quite a bit longer than
> they already have.  Meanwhile, new functionality that could benefit the
> community will remain unavailable.  In any event, "feature parity" is a bit
> vague.  If there is something specific with regards to 5.x parity that
> you're looking for then I think you should make that explicit so it can be
> evaluated.
>
> I'm in favor of merging the addressing changes onto master, hardening
> things up a bit, and then releasing.
>
>
> Justin
>
> ----- Original Message -----
> From: "Matt Pavlovich" <mattrpav@gmail.com>
> To: dev@activemq.apache.org
> Sent: Wednesday, December 7, 2016 2:04:13 PM
> Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component
> removal and potential 2.0.0
>
> IMHO, I think it would be good to kick up a thread on what it means to
> be 2.0. It sounds like the addressing changes definitely warrant it on
> their own, but I'm thinking having ActiveMQ 5.x feature parity would be
> a good goal for the 2.0 release.  My $0.02
>
> On 12/7/16 2:56 PM, Clebert Suconic wrote:
> > +1000
> >
> >
> > It needs one final cleanup before it can be done, though... these
> > commit messages need meaningful descriptions.
> >
> > It would be good if Justin or Martyn could come up with those, since
> > they did most of the work on the branch.
> >
> > This will really require bumping the release to 2.0.0 (there's a
> > 2.0.snapshot commit on it already).  I would merge this into master,
> > and fork the current master as 1.x.
> >
> >
> >
> >
> > On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish <tabish121@gmail.com>
> wrote:
> >> This would be a good time to move to master; it would allow others
> >> to more easily get on board.
> >>
> >>
> >> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
> >>> I have rebased ARTEMIS-780 on top of master. There were a lot of
> >>> conflicts...
> >>>
> >>> I have aggregated/squashed most of the commits, roughly in
> >>> chronological order. So if Martyn had 10 commits in a series, I
> >>> squashed all of them, since they were small commits anyway. The good
> >>> thing about this is that nobody loses authorship of these commits.
> >>>
> >>> We will need to come up with more meaningful messages for these
> >>> commits before we can merge into master. But this is getting into a
> >>> very good shape. I'm impressed by the amount of work I see done on
> >>> this branch. Very well done guys! I mean it!
> >>>
> >>> Also, I have saved the old branch as old-ARTEMIS-780 before I pushed
> >>> -f to my fork, in case I broke anything in the process. Please check
> >>> everything and let me know if I did.
> >>>
> >>>
> >>> And please rebase this branch more often, unless you merge it soon.
> >>>
> >>>
> >>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic
> >>> <clebert.suconic@gmail.com> wrote:
> >>>> If/when we do the 2.0 bump, I would like to move a few classes,
> >>>> mainly under server.impl: move the activations under an activation
> >>>> package, the replication endpoints under a replication package...
> >>>> small stuff like that, just to reorganize things a bit.
> >>>>
> >>>> We can't do that now, as that would break API compatibility, but if
> >>>> we do the bump, I would like to make that simple move.
> >>>>
> >>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor <mtaylor@redhat.com>
> >>>> wrote:
> >>>>> Hi Matt,
> >>>>>
> >>>>> Comments inline.
> >>>>>
> >>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich <mattrpav@gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>>> Martyn-
> >>>>>>
> >>>>>> I think you nailed it here-- well done =)
> >>>>>>
> >>>>>> My notes in-line--
> >>>>>>
> >>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
> >>>>>>
> >>>>>>> 1. Ability to route messages to queues with the same address, but
> >>>>>>> different routing semantics.
> >>>>>>>
> >>>>>>> The proposal in ARTEMIS-780 outlines a new model that introduces
> >>>>>>> an address object at the configuration and management layer. In
> >>>>>>> the proposal it is not possible to create 2 addresses with
> >>>>>>> different routing types. This causes a problem with existing
> >>>>>>> clients (JMS, STOMP, and for compatibility with other vendors).
> >>>>>>>
> >>>>>>> Potential Modification: Addresses can have multiple routing type
> >>>>>>> "endpoints": "multicast" only, "anycast" only, or both. The
> >>>>>>> example below would be used to represent a JMS Topic called "foo"
> >>>>>>> with a single subscription queue, and a JMS Queue called "foo".
> >>>>>>> N.B. The actual XML is just an example; there are multiple ways
> >>>>>>> this could be represented that we can define later.
> >>>>>>>
> >>>>>>> <addresses>
> >>>>>>>    <address name="foo">
> >>>>>>>       <anycast>
> >>>>>>>          <queues>
> >>>>>>>             <queue name="foo" />
> >>>>>>>          </queues>
> >>>>>>>       </anycast>
> >>>>>>>       <multicast>
> >>>>>>>          <queues>
> >>>>>>>             <queue name="my.topic.subscription" />
> >>>>>>>          </queues>
> >>>>>>>       </multicast>
> >>>>>>>    </address>
> >>>>>>> </addresses>
> >>>>>>>
> >>>>>> I think this solves it. The crux of the issues (for me) boils down
> >>>>>> to auto-creation of destinations across protocols. Having this
> >>>>>> show up in the configs would give developers and admins more
> >>>>>> information to troubleshoot the mixed address type+protocol
> >>>>>> scenario.
> >>>>>>
> >>>>>>> 2. Sending to "multicast", "anycast" or "all"
> >>>>>>>
> >>>>>>> As mentioned earlier, JMS (and other clients, such as STOMP via
> >>>>>>> prefixing) allows the producer to identify the type of endpoint
> >>>>>>> it would like to send to.
> >>>>>>>
> >>>>>>> If a JMS client creates a producer and passes in a topic with
> >>>>>>> address "foo", then messages should be routed only to the queues
> >>>>>>> associated with the "multicast" section of the address. A similar
> >>>>>>> thing happens when the JMS producer sends to a "queue": messages
> >>>>>>> should be distributed amongst the queues associated with the
> >>>>>>> "anycast" section of the address.
> >>>>>>>
> >>>>>>> There may also be a case when a producer does not identify the
> >>>>>>> endpoint type and simply sends to "foo". AMQP or MQTT may want to
> >>>>>>> do this. In this scenario both should happen: all the queues
> >>>>>>> under the multicast section get a copy of the message, and one
> >>>>>>> queue under the anycast section gets the message.
> >>>>>>>
> >>>>>>> Modification: None needed. Internal APIs would need to be updated
> >>>>>>> to allow this functionality.
> >>>>>>>
> >>>>>> I think the "deliver to all" scenario should be fine. This seems
> >>>>>> analogous to a CompositeDestination in ActiveMQ 5.x. I'll map
> >>>>>> through some scenarios and report back any gotchas.
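> >>>>>>
> >>>>>> For reference, the 5.x composite destination config I have in mind
> >>>>>> looks roughly like this (a from-memory sketch; the names are just
> >>>>>> examples):
> >>>>>>
> >>>>>> <destinationInterceptors>
> >>>>>>    <virtualDestinationInterceptor>
> >>>>>>       <virtualDestinations>
> >>>>>>          <compositeQueue name="foo">
> >>>>>>             <forwardTo>
> >>>>>>                <queue physicalName="foo.queue" />
> >>>>>>                <topic physicalName="foo.topic" />
> >>>>>>             </forwardTo>
> >>>>>>          </compositeQueue>
> >>>>>>       </virtualDestinations>
> >>>>>>    </virtualDestinationInterceptor>
> >>>>>> </destinationInterceptors>
> >>>>>>
> >>>>>> A message sent to "foo" is delivered to both the queue and the
> >>>>>> topic, which is the same "deliver to all" semantic described above.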
> >>>>>>
> >>>>>>> 3. Support for prefixes to identify endpoint types
> >>>>>>>
> >>>>>>> Many clients (ActiveMQ 5.x, STOMP, and potential clients from
> >>>>>>> alternate vendors) identify the endpoint type (in producer and
> >>>>>>> consumer) using a prefix notation.
> >>>>>>>
> >>>>>>> e.g. queue:///foo
> >>>>>>>
> >>>>>>> Which would identify:
> >>>>>>>
> >>>>>>> <addresses>
> >>>>>>>    <address name="foo">
> >>>>>>>       <anycast>
> >>>>>>>          <queues>
> >>>>>>>             <queue name="my.foo.queue" />
> >>>>>>>          </queues>
> >>>>>>>       </anycast>
> >>>>>>>    </address>
> >>>>>>> </addresses>
> >>>>>>>
> >>>>>>> Modifications Needed: None to the model. An additional parameter
> >>>>>>> should be added to the acceptors to identify the prefix.
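> >>>>>>>
> >>>>>>> For illustration only, the acceptor parameter could look something
> >>>>>>> like this (the parameter names here are placeholders, not a final
> >>>>>>> design):
> >>>>>>>
> >>>>>>> <acceptors>
> >>>>>>>    <acceptor name="stomp">tcp://0.0.0.0:61613?anycastPrefix=queue:///;multicastPrefix=topic:///</acceptor>
> >>>>>>> </acceptors>
> >>>>>>>
> >>>>>>> A consumer or producer using "queue:///foo" would then be matched
> >>>>>>> to the "anycast" section of the address "foo".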
> >>>>>>>
> >>>>>> Just as a checkpoint on the syntax+naming convention in your
> >>>>>> provided example... would the name actually be:
> >>>>>>
> >>>>>> <queue name="foo" ... vs "my.foo.queue" ?
> >>>>>>
> >>>>> The queue name can be anything.  It's the address that is used by
> >>>>> the consumer/producer.  The protocol handler / broker will decide
> >>>>> which queue to connect to.
> >>>>>
> >>>>>>> 4. Multiple endpoints are defined, but the client does not
> >>>>>>> specify an "endpoint routing type" when consuming
> >>>>>>>
> >>>>>>> Handling cases where a consumer does not pass enough information
> >>>>>>> in its address, or via protocol-specific mechanisms, to identify
> >>>>>>> an endpoint. Let's say an AMQP client requests to subscribe to the
> >>>>>>> address "foo", but passes no extra information. In the case where
> >>>>>>> only a single endpoint type is defined, the consumer would be
> >>>>>>> associated with that endpoint type. However, when both endpoint
> >>>>>>> types are defined, the protocol handler does not know whether to
> >>>>>>> associate this consumer with a queue under the "anycast" section,
> >>>>>>> or whether to create a new queue under the "multicast" section.
> >>>>>>> e.g.
> >>>>>>>
> >>>>>>> Consume: "foo"
> >>>>>>>
> >>>>>>> <addresses>
> >>>>>>>    <address name="foo">
> >>>>>>>       <anycast>
> >>>>>>>          <queues>
> >>>>>>>             <queue name="foo" />
> >>>>>>>          </queues>
> >>>>>>>       </anycast>
> >>>>>>>       <multicast>
> >>>>>>>          <queues>
> >>>>>>>             <queue name="my.topic.subscription" />
> >>>>>>>          </queues>
> >>>>>>>       </multicast>
> >>>>>>>    </address>
> >>>>>>> </addresses>
> >>>>>>>
> >>>>>>> In this scenario, we can make the default configurable on the
> >>>>>>> protocol/acceptor. Possible options for this could be:
> >>>>>>>
> >>>>>>> "multicast": Defaults to multicast
> >>>>>>>
> >>>>>>> "anycast": Defaults to anycast
> >>>>>>>
> >>>>>>> "error": Returns an error to the client
> >>>>>>>
> >>>>>>> Alternatively, each protocol handler could handle this in the most
> >>>>>>> sensible way for that protocol. MQTT might default to "multicast",
> >>>>>>> STOMP to "anycast", and AMQP to "error".
> >>>>>>>
> >>>>>> Yep, this works great. I think there are two flags on the
> >>>>>> acceptors: one for auto-create and one for default handling of a
> >>>>>> name collision. The defaults would most likely be the same.
> >>>>>>
> >>>>>> Something along the lines of:
> >>>>>>
> >>>>>> auto-create-default = "multicast | anycast"
> >>>>>> no-prefix-default = "multicast | anycast | error"
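> >>>>>>
> >>>>>> As acceptor config, that might look something like this (the flag
> >>>>>> names above and below are placeholders, not a final design):
> >>>>>>
> >>>>>> <acceptors>
> >>>>>>    <acceptor name="mqtt">tcp://0.0.0.0:1883?auto-create-default=multicast;no-prefix-default=multicast</acceptor>
> >>>>>>    <acceptor name="amqp">tcp://0.0.0.0:5672?no-prefix-default=error</acceptor>
> >>>>>> </acceptors>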
> >>>>>>
> >>>>>>> 5. Fully qualified address names
> >>>>>>>
> >>>>>>> This feature allows a client to identify a particular address on
> >>>>>>> a specific broker in a cluster. This could be achieved by the
> >>>>>>> client using some form of address such as:
> >>>>>>>
> >>>>>>> queue:///host/broker/address/
> >>>>>>>
> >>>>>>> Matt, could you elaborate on the drivers behind this requirement,
> >>>>>>> please?
> >>>>>>>
> >>>>>>> I am of the opinion that this is out of the scope of the
> >>>>>>> addressing changes, and is more to do with redirecting in cluster
> >>>>>>> scenarios. The current model will support this address syntax if
> >>>>>>> we want to use it in the future.
> >>>>>>>
> >>>>>> I agree that tackling the impl of this should be out of scope. My
> >>>>>> recommendation is to consider it in the addressing design now, so
> >>>>>> we can hopefully avoid any breakage down the road.
> >>>>>>
> >>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS,
> >>>>>> etc.) is the ability to fully address a destination using a format
> >>>>>> similar to this:
> >>>>>>
> >>>>>> queue://brokerB/myQueue
> >>>>>>
> >>>>>> The advantage of this is that it allows the number of destinations
> >>>>>> to scale, and allows more dynamic broker networks to be created
> >>>>>> without applications having to have connection information for all
> >>>>>> brokers in a broker network. Think simple delivery+routing, not
> >>>>>> horizontal scaling. It is very analogous to SMTP mail routing.
> >>>>>>
> >>>>>> Producer behavior:
> >>>>>>
> >>>>>> 1. Client X connects to brokerA and sends it a message addressed:
> >>>>>> queue://brokerB/myQueue
> >>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all
> >>>>>> acknowledgement and persistence accordingly
> >>>>>> 3. brokerA would then store the message in a "queue" for brokerB.
> >>>>>> Note: all messages for brokerB are generally stored in one queue --
> >>>>>> this is how it helps with destination scaling
> >>>>>>
> >>>>>> Broker-to-broker behavior:
> >>>>>>
> >>>>>> There are generally two scenarios: always-on or periodic-check.
> >>>>>>
> >>>>>> In "always-on":
> >>>>>> 1. brokerA looks for a brokerB in its list of cluster connections
> >>>>>> and then sends all messages for all queues for brokerB (or brokerB
> >>>>>> pulls all messages, depending on the cluster connection config)
> >>>>>>
> >>>>>> In "periodic-check":
> >>>>>> 1. brokerB connects to brokerA (or vice versa) on a given time
> >>>>>> interval and then receives any messages that have arrived since the
> >>>>>> last check
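> >>>>>>
> >>>>>> In ActiveMQ 5.x terms, the "always-on" case is roughly what a
> >>>>>> network connector gives you (sketch only; the static URI below is
> >>>>>> just an example):
> >>>>>>
> >>>>>> <networkConnectors>
> >>>>>>    <networkConnector name="to-brokerB"
> >>>>>>                      uri="static:(tcp://brokerB:61616)" />
> >>>>>> </networkConnectors>
> >>>>>>
> >>>>>> The difference here is that the client addresses the remote broker
> >>>>>> explicitly, rather than the brokers forwarding based on consumer
> >>>>>> demand.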
> >>>>>>
> >>>>>> TL;DR:
> >>>>>>
> >>>>>> It would be cool to consider remote broker delivery for messages
> >>>>>> while refactoring the address handling code. This would bring
> >>>>>> Artemis in line with the rest of the commercial EMS brokers. The
> >>>>>> impact now is hopefully minor -- just thinking about default
> >>>>>> prefixes.
> >>>>>>
> >>>>> Understood. From our conversations on IRC I can see why this might
> >>>>> be useful.
> >>>>>
> >>>>>> Thanks,
> >>>>>> -Matt
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>
> >>>> --
> >>>> Clebert Suconic
> >>>
> >>>
> >>
> >> --
> >> Tim Bish
> >> twitter: @tabish121
> >> blog: http://timbish.blogspot.com/
> >>
> >
> >
>
>
