activemq-users mailing list archives

From jahlborn <>
Subject Re: Correctly configuring a network of brokers
Date Wed, 11 Nov 2015 16:18:25 GMT
First, thank you so much for your detailed answers!  I have a few more
questions inline.

> * NetworkConnector
> > ** "dynamicOnly" - i've seen a couple of places mention enabling this and
> >    some indication that it helps with scaling in a network of brokers
> >    (e.g. [3]).  The description in [1] also makes it sound like something
> >    i would want to enable.  However, the value defaults to false, which
> >    seems to indicate that there is a down-side to enabling it.  Why
> >    wouldn't i want to enable this?
> >
> >
> One major difference here is that with this setting disabled (the default)
> messages will stay on the broker to which they were first produced, while
> with it enabled they will go to the broker to which the durable subscriber
> was last connected (or at least, the last one in the route to the consumer,
> if the broker to which the consumer was actually connected has gone down
> since then).  There are at least three disadvantages to enabling it: 1) if
> the producer connects to an embedded broker, then those messages go offline
> when the producer goes offline and aren't available when the consumer
> reconnects, 2) it only takes filling one broker's store before Producer
> Flow Control throttles the producer (whereas with the default setting you
> have to fill every broker along the last route to the consumer before PFC
> kicks in), and 3) if you have a high-latency network link in the route from
> producer to consumer, you delay traversing it until the consumer
> reconnects, which means the consumer may experience more latency than it
> otherwise would.  So as with so many of these settings, the best
> configuration for you will depend on your situation.

I'm a little confused by 3).  is that the behavior when this feature is
enabled (dynamicOnly: true) or disabled (dynamicOnly: false)?
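for reference, here's roughly where this attribute lives in the broker config
(a sketch only; the connector name and broker URI are placeholders):

```xml
<!-- activemq.xml sketch; "to-brokerB" and the URI are placeholder values -->
<networkConnectors>
  <!-- dynamicOnly="false" is the default: durable-topic messages are
       forwarded across the network bridge even while the remote durable
       subscriber is offline.  Setting it to "true" defers forwarding
       until the subscription is re-activated downstream. -->
  <networkConnector name="to-brokerB"
                    uri="static:(tcp://brokerB:61616)"
                    dynamicOnly="false"/>
</networkConnectors>
```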

> Also, default values are often the default because that's what they were
> when they were first introduced (to avoid breaking legacy configurations),
> not necessarily because that's the setting that's recommended for all
> users.  Default values do get changed when the old value is clearly not
> appropriate and the benefits of a change outweigh the inconvenience to
> legacy users, but when there's not a clear preference they usually get
> left alone, which is a little confusing to new users.

yep, definitely understand that.  which is one of the complications, because
you don't always know if a default is a default because it's the best option
or because it's a backwards-compatibility choice.

> > ** "networkTTL", "messageTTL", "consumerTTL" - until recently, we kept
> >    these at the defaults (1).  However, we recently realized that we can
> >    end up with stuck messages with these settings.  I've seen a couple of
> >    places which recommend setting "networkTTL" to the number of brokers
> >    in the network (e.g. [2]), or at least something > 1.  However, the
> >    recommendation for "consumerTTL" on [1] is that this value should be 1
> >    in a mesh network (and setting the "networkTTL" will set the
> >    "consumerTTL" as well).  Additionally, [2] seems to imply that enabling
> >    "suppressDuplicateQueueSubscriptions" acts like "networkTTL" is 1 for
> >    proxy messages (unsure what this means?).  We ended up setting only the
> >    "messageTTL" and this seemed to solve our immediate problem.  Unsure if
> >    it will cause other problems...?
> >
> >
> In a mesh (all brokers connected to each other), you only need a
> consumerTTL of 1, because you can get the advisory message to every other
> broker in one hop.  But in that same mesh, there's no guarantee that a
> single hop will get you to the broker where the consumer is, because the
> consumer might jump to another node in the mesh before consuming the
> message, which would then require another forward.  So in a mesh with
> decreaseNetworkConsumerPriority you may need a messageTTL/networkTTL of
> 1 + [MAX # FORWARDS] or greater, where [MAX # FORWARDS] is the worst-case
> number of jumps a consumer might make between the time a message is
> produced and the time it is consumed.  In your case you've chosen 9999, so
> that allows 9998 consumer jumps, which should be more than adequate.

any idea why the "network of brokers" documentation [1] has the
recommendation for "consumerTTL" of "keep to 1 in a mesh"?
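for what it's worth, what we ended up with looks roughly like this (a
sketch; the connector name and URI are placeholders, and the 9999 is just
the value from our setup, not a general recommendation):

```xml
<!-- Setting messageTTL and consumerTTL independently rather than via
     the combined networkTTL attribute -->
<networkConnector name="to-brokerB"
                  uri="static:(tcp://brokerB:61616)"
                  messageTTL="9999"
                  consumerTTL="1"/>
```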

> > ** "prefetchSize" - defaults to 1000, but I see recommendations that it
> >    should be 1 for network connectors (e.g. [3]).  I think that in our
> >    initial testing i saw bad things happen with this setting and got more
> >    even load balancing by lowering it to 1.
> >
> >
> As I mentioned above, setting a small prefetch size is important for load
> balancing; if you allow a huge backlog of messages to buffer up for one
> consumer, the other consumers can't work on them even if they're sitting
> around idle.  I'd pick a value like 1, 3, 5, 10, etc.; something small
> relative to the number of messages you're likely to have pending at any
> one time.  (But note that the prefetch buffer can improve performance if
> you have messages that take a variable amount of time to process and
> sometimes the amount of time to process them is lower than the amount of
> time to transfer them between your brokers or from the broker to the
> consumer, such as with a high-latency network link.  This doesn't sound
> like your situation, but it's yet another case where the right setting
> depends on your situation.)

when you say "smaller prefetch buffer sizes", i assume you mean for _all_
consumers, not just the network connectors?  our product runs the activemq
brokers embedded within our application (so consumers are in the same jvm as
the brokers).  in this case, does the consumer prefetch size make much of a
difference in terms of raw speed of consumption (ignoring the load balancing
issue for a moment)?
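for context, here's how i understand the two knobs in question (a sketch;
the connector name and URI are placeholders).  the network-bridge prefetch
goes on the networkConnector element:

```xml
<!-- network bridge prefetch, per the load-balancing advice above -->
<networkConnector name="to-brokerB"
                  uri="static:(tcp://brokerB:61616)"
                  prefetchSize="1"/>
```

while the prefetch for our in-JVM consumers would be set on the client
connection URI instead, e.g.
`vm://localhost?jms.prefetchPolicy.queuePrefetch=1` for an embedded broker.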

> > [1]
> > [2]
> > [3]
