qpid-users mailing list archives

From Olivier Mallassi <olivier.malla...@gmail.com>
Subject Re: [C++ Broker][HA Queue Replication]
Date Mon, 04 Jan 2016 12:56:58 GMT
Again, thank you for all these details.

In my case, I think I would prefer the federation option because I am fine
with the fact that the broker can "queue" and store a message backlog. Yet, let
me think about this ;)

Your remark regarding the unreliable side is interesting. In fact, I was
thinking about using distributed (not federated) brokers behind a couple
of dispatch routers (and implementing distributed queues), but I may not even
need brokers. Maybe a "dispatch router" acting as a relay would work
perfectly!
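
For that relay idea, a broker-less router could be sketched with a qdrouterd
configuration fragment like this (a sketch only; the id, port and address
prefix are placeholders, and entity names assume a recent Qpid Dispatch
Router release):

```
# qdrouterd.conf -- illustrative sketch, not from this thread.
router {
    mode: standalone
    id: Relay.A
}
listener {
    host: 0.0.0.0
    port: 5672
    role: normal
}
# Fan each incoming message out to all current subscribers; a message
# with no subscriber is simply released (no broker, no storage) --
# exactly the "unreliable but low latency" behaviour.
address {
    prefix: events
    distribution: multicast
}
```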


On Wed, Dec 30, 2015 at 7:09 PM, aconway <aconway@redhat.com> wrote:

> On Fri, 2015-12-18 at 09:24 +0100, Olivier Mallassi wrote:
> > Hi all
> >
> > Gordon, thx.
> >
> > Regarding your last question "What are you aiming to achieve with the
> > federation? Is it scaling beyond the capacity of a single broker?" I
> > would say, "in the long run, yes".
> >
> > In fact, I have two main use-cases.
> > 1/ The first one must be reliable. In that case, persistence and
> > clustering should be used. But it must also be able to scale.
> > Federation gives me a way to scale the publisher side by sending
> > events to any broker (round-robin load balancing); behind the scenes,
> > the messages will end up in the right queue on the right broker
> > (based on the bindings / routing key).
> > It should be ok if routing keys are filtering events enough.
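
The round-robin publishing side can be sketched in plain Python (the broker
URLs and the commented-out `send_to_broker` call are placeholders for a real
AMQP client):

```python
import itertools

# Hypothetical broker endpoints; in a real setup these would be the
# federated brokers' AMQP URLs.
BROKERS = ["amqp://broker1:5672", "amqp://broker2:5672", "amqp://broker3:5672"]

def make_publisher(brokers):
    """Return a publish function that spreads messages across the brokers
    in round-robin order; federation then routes each message onward to
    the right queue based on its routing key."""
    ring = itertools.cycle(brokers)

    def publish(routing_key, body):
        broker = next(ring)
        # send_to_broker(broker, routing_key, body)  # real AMQP send here
        return broker

    return publish

publish = make_publisher(BROKERS)
targets = [publish("events.orders", b"msg-%d" % i) for i in range(6)]
# Each broker receives an equal share of the traffic.
```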
>
> You should be able to do this with dispatch to an HA cluster as well.
> I'm very interested in making sure dispatch works well in an HA
> environment, so let me know of any issues you come across.
>
> >
> > 2/ The second one is unreliable but "low latency", must scale beyond
> > broker capacity (almost surely), and needs HA. So you can lose
> > messages, but the distribution chain must stay up and continue
> > broadcasting incoming events.
> > In that case, I do not want clustering (because I do not want to pay
> > the price of replication), but the dispatch router can help me
> > because it gives me distributed queues. (I agree it will also help me
> > with security/proxies etc.)
> >
> > What are your thoughts on this?
>
> > On my side, clearly, the NFRs (& figures) need to be clarified and
> > more perf tests will be done.
> > I am still figuring out how to play with the 4 dimensions
> > (persistence, clustering, federation, dispatch router) to build these
> > channels in the simplest way possible.
> >
>
> Dispatch: can help with scale, it does connection concentration and
> (very limited) buffering so you can funnel large numbers of clients to
> a single broker. The limited buffering can be considered a feature:
> clients trying to send to an overloaded broker via dispatch will
> experience flow-control back-pressure and slow down their sending till
> the broker acks messages.
>
> Federation: a lot like dispatch in that you can set up flexible
> mapping/routing rules. The big difference is a federated broker *does*
> take responsibility for messages and can be a persistent and/or HA
> "buffer" on the way to the final destination.
>
> The dispatch/federation trade-off is: do you want your clients to be
> forced to slow down when the "real" back-end is overloaded and wait
> till it can handle more data, or do you want long "fire and forget"
> buffers so clients can drop messages and move on even if the system is
> busy? Buffers can help you get better throughput over short load spikes
> but sustained overload will cause growing queues, growing latency and
> generally bigger trouble before your clients start to back off.
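
The back-pressure side of that trade-off can be modelled with a small
credit-window simulation (plain Python; `CREDIT` is an arbitrary value for
illustration, not a dispatch default):

```python
from collections import deque

CREDIT = 5  # link credit the router grants the sender (illustrative)

def run(total_msgs, acks_per_tick):
    """Toy model of credit-based flow control: the sender may only have
    CREDIT unacknowledged messages in flight; each broker ack restores
    one unit of credit, so a slow broker slows the sender down."""
    in_flight = deque()
    credit = CREDIT
    sent = acked = 0
    max_in_flight = 0
    while acked < total_msgs:
        # Sender transmits while it has credit and messages left.
        while credit > 0 and sent < total_msgs:
            in_flight.append(sent)
            sent += 1
            credit -= 1
        max_in_flight = max(max_in_flight, len(in_flight))
        # Broker acknowledges a few messages per tick.
        for _ in range(min(acks_per_tick, len(in_flight))):
            in_flight.popleft()
            acked += 1
            credit += 1
    return max_in_flight

# However slow the broker, the sender never has more than CREDIT
# messages outstanding -- that is the flow-control back-pressure.
```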
>
>
> For your summary my "gut reaction" would be to try dispatch to a
> cluster or group of clusters initially for the reliable part. There is
> a "distributed queue" test in the dispatch codebase that shows how you
> can implement a "single queue" (from the sender and subscriber
> perspective) as a set of queues on separate brokers - each of which
> could be a cluster. I would add federation only if you have a need for
> the kind of buffering I mention above, otherwise the dispatch topology
> and horizontally adding clusters should cover a lot of scaling issues.
>
> For the unreliable side I would guess you could use dispatch alone - if
> it is ok to lose messages if nobody is listening, then that is exactly
> the default behavior for dispatch.
>
> Also the advantage of a dispatch backbone is you can change your choice
> of broker without any effect on the clients. Only the broker
> configuration would be different.
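
As an illustration of that decoupling, a qdrouterd link-route fragment might
look like this (a sketch; the host name and the `queue.` prefix are
placeholders, and entity names assume a recent Qpid Dispatch Router release):

```
# qdrouterd.conf fragment -- illustrative sketch.  Clients talk only to
# the router; swapping the broker means editing this connector, not the
# clients.
connector {
    name: broker1
    host: broker1.example.com
    port: 5672
    role: route-container
}
linkRoute {
    prefix: queue.
    connection: broker1
    direction: in
}
linkRoute {
    prefix: queue.
    connection: broker1
    direction: out
}
```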
>
>
>
> > Cheers .
> >
> >
> >
> >
> >
> >
> > On Wed, Dec 16, 2015 at 1:02 PM, Gordon Sim <gsim@redhat.com> wrote:
> >
> > > On 12/15/2015 04:40 PM, Olivier Mallassi wrote:
> > >
> > > > Hi all
> > > >
> > > > I am still digging into the qpid technologies in order to better
> > > > understand
> > > > how all the pieces can be tied together.
> > > > Switching to the C++ broker implementation, I am trying to
> > > > understand how
> > > > HA cluster and federation can work together and your feedback
> > > > would be
> > > > appreciated.
> > > >
> > > > AFAIU, if I mix Federation and HA clustering I can end up with a
> > > > deployment
> > > > like this one (hope the formatting will stay)
> > > >
> > > >
> > > > publisher (C++/Java) --> | Federated Broker.1 Active + Passive |
> > > >                          | Federated Broker.2 Active + Passive | --> Consumer
> > > >                          | Federated Broker.3 Active + Passive |
> > > > So for a 3-node federated broker setup, I have 6 processes + 3
> > > > VIPs (or something to promote the passive as a new primary in
> > > > case of failure), which is fine.
> > > >
> > > > I was then trying to reduce the number of processes using HA
> > > > queue replication (which looks like a subpart of the HA module
> > > > and just replicates the queue - not the complete broker).
> > > > With this, I am able to have
> > > >
> > > > publisher (C++/Java) --> | Federated Broker.1 Active                          |
> > > >                          | Federated Broker.2 Active                          | --> Consumer
> > > >                          | Federated Broker.3 Active + Consumer queue replica |
> > > >
> > > >
> > > > Then come a couple of questions:
> > > > - Are the queues asynchronously replicated (or synchronously
> > > > replicated if the consumer has not yet consumed the message, as
> > > > explained in the doc)?
> > > >
> > >
> > > The publisher will not get an acknowledgement for any message it
> > > publishes until that message has been acknowledged either by the
> > > passive backups or by the consuming client.
> > >
> > > > - Is there a way to properly handle fail-over in that case?
> > > > Because, from the doc (and I can understand this), it does not
> > > > look to be a good idea to write into the queue replica, but I do
> > > > not see a way to promote it as master.
> > > >
> > >
> > > You should never write to, or consume from, a backup. There is a
> > > tool (qpid-ha) for manually promoting one of the backups to
> > > master. The usual (recommended) practice is to use something like
> > > pacemaker to control this. There is probably some example script
> > > or doc around this but I can't locate it at the moment. (Alan?)
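
For reference, manual promotion with the qpid-ha tool looks roughly like
this (the host name is a placeholder; in production a cluster manager such
as pacemaker should drive the promotion rather than a human at a shell):

```
# Check the HA status of each broker (placeholder host).
qpid-ha status -b backup-host:5672

# Promote the chosen backup to primary.
qpid-ha promote -b backup-host:5672
```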
> > >
> > > > - Is this even the right approach?
> > > >
> > >
> > > What are you aiming to achieve with the federation? Is it scaling
> > > beyond
> > > the capacity of a single broker?
> > >
> > >
> > > -----------------------------------------------------------------
> > > ----
> > > To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> > > For additional commands, e-mail: users-help@qpid.apache.org
> > >
> > >
>
