From: Christopher Shannon
Date: Wed, 7 Dec 2016 16:11:37 -0500
To: dev@activemq.apache.org
Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0

+1 for merging the branch into master after the cleanup is done, and for bumping to 2.0 since it is a major architecture change.

On Wed, Dec 7, 2016 at 3:31 PM, Justin Bertram wrote:

> Essential feature parity with 5.x (where it makes sense) has been a goal all along, but I think waiting until such parity exists before the next major release means the community will be waiting quite a bit longer than they already have. Meanwhile, new functionality that could benefit the community will remain unavailable. In any event, "feature parity" is a bit vague. If there is something specific with regard to 5.x parity that you're looking for, then I think you should make that explicit so it can be evaluated.
>
> I'm in favor of merging the addressing changes onto master, hardening things up a bit, and then releasing.
>
>
> Justin
>
> ----- Original Message -----
> From: "Matt Pavlovich"
> To: dev@activemq.apache.org
> Sent: Wednesday, December 7, 2016 2:04:13 PM
> Subject: Re: [DISCUSS] Artemis addressing improvements, JMS component removal and potential 2.0.0
>
> IMHO, I think it would be good to kick off a thread on what it means to be 2.0. It sounds like the addressing changes definitely warrant it on their own, but I'm thinking that ActiveMQ 5.x feature parity would be a good goal for the 2.0 release. My $0.02.
>
> On 12/7/16 2:56 PM, Clebert Suconic wrote:
> > +1000
> >
> > It needs one final cleanup before it can be done, though: these commit messages need meaningful descriptions.
> >
> > It would be good if Justin or Martyn could come up with those, since they did most of the work on the branch.
> >
> > This will really require bumping the release to 2.0.0 (there is a 2.0.snapshot commit on it already). I would merge this into master and fork the current master as 1.x.
> >
> > On Wed, Dec 7, 2016 at 1:52 PM, Timothy Bish wrote:
> >> This would be a good time to move to master; it would allow others to get on board more easily.
> >>
> >> On 12/07/2016 01:25 PM, Clebert Suconic wrote:
> >>> I have rebased ARTEMIS-780 on top of master. There were a lot of conflicts...
> >>>
> >>> I have aggregated/squashed most of the commits, roughly in chronological order. So if Martyn had 10 commits in a series, I squashed all of them, since they were small commits anyway. The good thing about this is that nobody loses authorship of these commits.
> >>>
> >>> We will need to come up with more meaningful messages for these commits before we can merge into master.
> >>> But this is getting into very good shape. I'm impressed by the amount of work I see done on this branch. Very well done, guys! I mean it!
> >>>
> >>> Also, I saved the old branch before I pushed -f to my fork, as old-ARTEMIS-780, in case I broke anything in the process. Please check everything and let me know if I did.
> >>>
> >>> And please rebase more often on this branch unless you merge it soon.
> >>>
> >>> On Mon, Nov 28, 2016 at 2:36 PM, Clebert Suconic wrote:
> >>>> If / when we do the 2.0 bump, I would like to move a few classes, mainly under server.impl... I would like to move activations under a package for activation, replication endpoints under a package for replication... some small stuff like that, just to reorganize little things a bit.
> >>>>
> >>>> We can't do that now, as it would break API compatibility, but if we do the bump I would like to make that simple move.
> >>>>
> >>>> On Thu, Nov 24, 2016 at 4:41 AM, Martyn Taylor wrote:
> >>>>> Hi Matt,
> >>>>>
> >>>>> Comments inline.
> >>>>>
> >>>>> On Mon, Nov 21, 2016 at 7:02 PM, Matt Pavlovich wrote:
> >>>>>> Martyn-
> >>>>>>
> >>>>>> I think you nailed it here -- well done =)
> >>>>>>
> >>>>>> My notes in-line--
> >>>>>>
> >>>>>> On 11/21/16 10:45 AM, Martyn Taylor wrote:
> >>>>>>> 1. Ability to route messages to queues with the same address, but different routing semantics.
> >>>>>>>
> >>>>>>> The proposal outlined in ARTEMIS-780 describes a new model that introduces an address object at the configuration and management layer. In the proposal it is not possible to create two addresses with different routing types. This causes a problem for existing clients (JMS, STOMP, and, for compatibility, other vendors).
> >>>>>>>
> >>>>>>> Potential modification: addresses can have multiple routing-type "endpoints", either "multicast" only, "anycast" only, or both. The example below would be used to represent a JMS Topic called "foo", with a single subscription queue, and a JMS Queue called "foo". N.B. the actual XML is just an example; there are multiple ways this could be represented that we can define later.
> >>>>>>>
> >>>>>>>   <addresses>
> >>>>>>>     <address name="foo">
> >>>>>>>       <anycast>
> >>>>>>>         <queues>
> >>>>>>>           <queue name="foo" />
> >>>>>>>         </queues>
> >>>>>>>       </anycast>
> >>>>>>>       <multicast>
> >>>>>>>         <queues>
> >>>>>>>           <queue name="my.topic.subscription" />
> >>>>>>>         </queues>
> >>>>>>>       </multicast>
> >>>>>>>     </address>
> >>>>>>>   </addresses>
> >>>>>>
> >>>>>> I think this solves it. The crux of the issue (for me) boils down to auto-creation of destinations across protocols. Having this show up in the configs would give developers and admins more information to troubleshoot the mixed address type + protocol scenario.
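A minimal client-side sketch of what point 1 enables: a JMS Topic and a JMS Queue that share the address "foo", with topic sends going to the "multicast" section and queue sends to the "anycast" section. The broker URL and message text below are assumptions for illustration only, not part of the proposal.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.Topic;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class SharedAddressSketch {
        public static void main(String[] args) throws JMSException {
            // Assumed broker URL for the sketch.
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            try (Connection connection = cf.createConnection()) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // Both destinations resolve to the address "foo"; under the proposed model the
                // topic maps to the "multicast" section, the queue to the "anycast" section.
                Topic topic = session.createTopic("foo");
                Queue queue = session.createQueue("foo");

                // A copy goes to every queue under <multicast> (e.g. my.topic.subscription).
                session.createProducer(topic).send(session.createTextMessage("to all subscribers"));

                // Exactly one queue under <anycast> receives this message.
                session.createProducer(queue).send(session.createTextMessage("to one consumer"));
            }
        }
    }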
> >>>>>>> 2. Sending to "multicast", "anycast" or "all"
> >>>>>>>
> >>>>>>> As mentioned earlier, JMS (and other clients, such as STOMP via prefixing) allows the producer to identify the type of endpoint it would like to send to.
> >>>>>>>
> >>>>>>> If a JMS client creates a producer and passes in a topic with the address "foo", then only the queues associated with the "multicast" section of the address receive the message. A similar thing happens when the JMS producer sends to a "queue": messages should be distributed amongst the queues associated with the "anycast" section of the address.
> >>>>>>>
> >>>>>>> There may also be a case where a producer does not identify the endpoint type and simply sends to "foo". AMQP or MQTT may want to do this. In this scenario both should happen: all the queues under the "multicast" section get a copy of the message, and one queue under the "anycast" section gets the message.
> >>>>>>>
> >>>>>>> Modification: none needed to the model. Internal APIs would need to be updated to allow this functionality.
> >>>>>>
> >>>>>> I think the "deliver to all" scenario should be fine. This seems analogous to a CompositeDestination in ActiveMQ 5.x. I'll map through some scenarios and report back any gotchas.
> >>>>>>
> >>>>>>> 3. Support for prefixes to identify endpoint types
> >>>>>>>
> >>>>>>> Many clients (ActiveMQ 5.x, STOMP, and potential clients from alternate vendors) identify the endpoint type (in producer and consumer) using a prefix notation, e.g.:
> >>>>>>>
> >>>>>>>   queue:///foo
> >>>>>>>
> >>>>>>> which would identify:
> >>>>>>>
> >>>>>>>   <addresses>
> >>>>>>>     <address name="foo">
> >>>>>>>       <anycast>
> >>>>>>>         <queues>
> >>>>>>>           <queue name="my.foo.queue" />
> >>>>>>>         </queues>
> >>>>>>>       </anycast>
> >>>>>>>     </address>
> >>>>>>>   </addresses>
> >>>>>>>
> >>>>>>> Modifications needed: none to the model. An additional parameter should be added to the acceptors to identify the prefix.
> >>>>>>
> >>>>>> Just as a check point on the syntax and naming convention in your example... would the name actually be <queue name="foo" ...> vs. "my.foo.queue"?
> >>>>>
> >>>>> The queue name can be anything. It's the address that is used by the consumer/producer. The protocol handler / broker will decide which queue to connect to.
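For point 3, the broker-side work amounts to stripping a configured prefix from the destination name and deriving the routing type from it. A rough sketch of that resolution logic follows; the parameter names (anycastPrefix, multicastPrefix, a no-prefix default) are hypothetical and not an agreed acceptor API.

    // Hypothetical helper, not Artemis API: illustrates prefix-based endpoint resolution and
    // the fallback to an acceptor-level default when no prefix is present (see point 4 below).
    public class PrefixResolverSketch {

        public enum RoutingType { ANYCAST, MULTICAST }

        public static final class Resolved {
            public final String address;
            public final RoutingType routingType;
            Resolved(String address, RoutingType routingType) {
                this.address = address;
                this.routingType = routingType;
            }
        }

        private final String anycastPrefix;    // e.g. "queue:///"
        private final String multicastPrefix;  // e.g. "topic:///"
        private final RoutingType noPrefixDefault;

        public PrefixResolverSketch(String anycastPrefix, String multicastPrefix, RoutingType noPrefixDefault) {
            this.anycastPrefix = anycastPrefix;
            this.multicastPrefix = multicastPrefix;
            this.noPrefixDefault = noPrefixDefault;
        }

        /** Strips a known prefix from the destination name and derives the routing type. */
        public Resolved resolve(String destination) {
            if (destination.startsWith(anycastPrefix)) {
                return new Resolved(destination.substring(anycastPrefix.length()), RoutingType.ANYCAST);
            }
            if (destination.startsWith(multicastPrefix)) {
                return new Resolved(destination.substring(multicastPrefix.length()), RoutingType.MULTICAST);
            }
            // No prefix: fall back to the configured default (an "error" option could throw instead).
            return new Resolved(destination, noPrefixDefault);
        }
    }

With anycastPrefix set to "queue:///", resolving "queue:///foo" would yield the address "foo" with anycast routing, matching the example above; a destination with no prefix falls back to a configured default, which ties into point 4 below.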
> >>>>>>> 4. Multiple endpoints are defined, but the client does not specify an "endpoint routing type" when consuming
> >>>>>>>
> >>>>>>> Handling cases where a consumer does not pass enough information in its address, or via protocol-specific mechanisms, to identify an endpoint. Let's say an AMQP client requests to subscribe to the address "foo" but passes no extra information. In the case where only a single endpoint type is defined, the consumer would be associated with that endpoint type. However, when both endpoint types are defined, the protocol handler does not know whether to associate this consumer with a queue under the "anycast" section, or whether to create a new queue under the "multicast" section, e.g.:
> >>>>>>>
> >>>>>>>   Consume: "foo"
> >>>>>>>
> >>>>>>>   <addresses>
> >>>>>>>     <address name="foo">
> >>>>>>>       <anycast>
> >>>>>>>         <queues>
> >>>>>>>           <queue name="foo" />
> >>>>>>>         </queues>
> >>>>>>>       </anycast>
> >>>>>>>       <multicast>
> >>>>>>>         <queues>
> >>>>>>>           <queue name="my.topic.subscription" />
> >>>>>>>         </queues>
> >>>>>>>       </multicast>
> >>>>>>>     </address>
> >>>>>>>   </addresses>
> >>>>>>>
> >>>>>>> In this scenario, we can make the default configurable on the protocol/acceptor. Possible options for this could be:
> >>>>>>>
> >>>>>>>   "multicast": defaults to multicast
> >>>>>>>   "anycast":   defaults to anycast
> >>>>>>>   "error":     returns an error to the client
> >>>>>>>
> >>>>>>> Alternatively, each protocol handler could handle this in the most sensible way for that protocol. MQTT might default to "multicast", STOMP to "anycast", and AMQP to "error".
> >>>>>>
> >>>>>> Yep, this works great. I think there are two flags on the acceptors: one for auto-create and one for the default handling of name collisions. The defaults would most likely be the same. Something along the lines of:
> >>>>>>
> >>>>>>   auto-create-default = "multicast | anycast"
> >>>>>>   no-prefix-default   = "multicast | anycast | error"
> >>>>>>
> >>>>>>> 5. Fully qualified address names
> >>>>>>>
> >>>>>>> This feature allows a client to identify a particular address on a specific broker in a cluster. This could be achieved by the client using some form of address such as:
> >>>>>>>
> >>>>>>>   queue:///host/broker/address/
> >>>>>>>
> >>>>>>> Matt, could you elaborate on the drivers behind this requirement, please?
> >>>>>>>
> >>>>>>> I am of the opinion that this is out of the scope of the addressing changes and is more to do with redirecting in cluster scenarios. The current model will support this address syntax if we want to use it in the future.
> >>>>>>
> >>>>>> I agree that tackling the implementation of this should be out of scope. My recommendation is to consider it in the addressing work now, so we can hopefully avoid any breakage down the road.
> >>>>>>
> >>>>>> A widely used feature in other EMS brokers (IBM MQ, Tibco EMS, etc.) is the ability to fully address a destination using a format similar to this:
> >>>>>>
> >>>>>>   queue://brokerB/myQueue
> >>>>>>
> >>>>>> The advantage of this is that it allows the number of destinations to scale and allows more dynamic broker networks to be created without applications having to have connection information for all brokers in a broker network. Think simple delivery + routing, not horizontal scaling. It is very analogous to SMTP mail routing.
> >>>>>>
> >>>>>> Producer behavior:
> >>>>>>
> >>>>>> 1. Client X connects to brokerA and sends it a message addressed: queue://brokerB/myQueue
> >>>>>> 2. brokerA accepts the message on behalf of brokerB and handles all acknowledgement and persistence accordingly
> >>>>>> 3. brokerA would then store the message in a "queue" for brokerB.
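A sketch of the producer side of steps 1-3 above, assuming the broker accepted a fully qualified name of the form queue://brokerB/myQueue; both that syntax and the broker URL are illustrative assumptions, since this feature is not part of the current proposal.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class RemoteBrokerAddressingSketch {
        public static void main(String[] args) throws JMSException {
            // Client X talks to brokerA only (URL assumed for the sketch).
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://brokerA:61616");
            try (Connection connection = cf.createConnection()) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // Hypothetical fully qualified destination naming brokerB explicitly.
                // brokerA would accept, acknowledge and persist the message, then forward it
                // to brokerB later (always-on cluster connection or periodic check).
                Queue remoteQueue = session.createQueue("queue://brokerB/myQueue");

                session.createProducer(remoteQueue)
                       .send(session.createTextMessage("accepted by brokerA, destined for brokerB"));
            }
        }
    }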
> >>>>>> Note: all messages for brokerB are generally stored in one queue -- this is how it helps with destination scaling.
> >>>>>>
> >>>>>> Broker to broker behavior:
> >>>>>>
> >>>>>> There are generally two scenarios: always-on or periodic-check.
> >>>>>>
> >>>>>> In "always-on":
> >>>>>> 1. brokerA looks for brokerB in its list of cluster connections and then sends all messages for all of brokerB's queues (or brokerB pulls all messages, depending on the cluster connection config)
> >>>>>>
> >>>>>> In "periodic-check":
> >>>>>> 1. brokerB connects to brokerA (or vice versa) on a given time interval and then receives any messages that have arrived since the last check
> >>>>>>
> >>>>>> TL;DR:
> >>>>>>
> >>>>>> It would be cool to consider remote broker delivery for messages while refactoring the address handling code. This would bring Artemis in line with the rest of the commercial EMS brokers. The impact now, hopefully, is minor and just means thinking about default prefixes.
> >>>>>
> >>>>> Understood; from our conversations on IRC I can see why this might be useful.
> >>>>>
> >>>>>> Thanks,
> >>>>>> -Matt
> >>>>
> >>>> --
> >>>> Clebert Suconic
> >>
> >> --
> >> Tim Bish
> >> twitter: @tabish121
> >> blog: http://timbish.blogspot.com/