activemq-users mailing list archives

From jahlborn <>
Subject Correctly configuring a network of brokers
Date Thu, 05 Nov 2015 21:02:52 GMT
So I've spent some time digging around the internet and experimenting with
setup, trying to determine the best configuration for a network of brokers.
There are a variety of things which seem to conspire against me: the number
of configuration options present in ActiveMQ, options which seem to have no
"optimal" setting (each choice has different pros and cons), and
behavior/features which change over time such that old recommendations may
be irrelevant or no longer correct.  For reference, we are using a mesh
network of brokers, where consumers may arrive at any broker in the network.
We use topics, queues, and virtual queues.  We use explicit receive() calls as
well as listeners.  We also utilize the "exclusive consumer" feature for some
of our clustered consumers.  All messaging is currently durable.  Some
relevant configuration bits (changes from the defaults):

* using activemq 5.9.1
* advisory support is enabled
* PolicyEntry
** optimizedDispatch: true
* Using ConditionalNetworkBridgeFilterFactory
** replayWhenNoConsumers: true
** replayDelay: 1000
* NetworkConnector
** prefetchSize: 1
** duplex: false
** messageTTL: 9999
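Expressed as broker XML, that configuration would look roughly like the
sketch below (the broker name, connector name, and transport URI here are
hypothetical placeholders, not our actual values):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">

  <!-- per-destination policy: optimizedDispatch plus the conditional
       bridge filter with replay enabled -->
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <policyEntry queue=">" optimizedDispatch="true">
          <networkBridgeFilterFactory>
            <conditionalNetworkBridgeFilterFactory
                replayWhenNoConsumers="true"
                replayDelay="1000"/>
          </networkBridgeFilterFactory>
        </policyEntry>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <!-- one network connector per peer in the mesh -->
  <networkConnectors>
    <networkConnector name="toBrokerB"
        uri="static:(tcp://brokerB:61616)"
        duplex="false"
        prefetchSize="1"
        messageTTL="9999"/>
  </networkConnectors>

</broker>
```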

Alright, now that we've got through all the context, here's the meat of the
question.  There are a bunch of configuration parameters (mostly focused in
the NetworkConnector), and it's not at all clear to me if my current
configuration is optimal.  For instance, although we had been using a
configuration similar to the above for about a year now (same as above except
that messageTTL was 1), we only recently discovered that our exclusive
consumers can sometimes stop processing messages.  In certain cases, the
consumer would end up bouncing around and the messages would end up getting
stuck on one node.  Adding the messageTTL setting seems to fix the problem
(but is it the right fix...?).

* NetworkConnector
** "dynamicOnly" - i've seen a couple of places mention enabling this and
   indication that it helps with scaling in a network of brokers (e.g. [3]).
   The description in [1] also makes it sound like something i would want to
   enable.  However, the value defaults to false, which seems to indicate
   there is a down-side to enabling it.  Why wouldn't i want to enable this?
** "decreaseNetworkConsumerPriority", "suppressDuplicateQueueSubscriptions"
   these params both seem like "damned if you do, damned if you don't" type
   parameters.  The first comment in [2] seems to imply that in order to
   scale, you really want to enable these features so that producers prefer
   pushing messages to local consumers (makes sense).  Yet, at the same
   it seems that enabling this feature will _decrease_ scalability in that
   won't evenly distribute messages in the case when there are multiple
   consumers (we use clusters of consumers in some scenarios).  Also in [2],
   there are some allusions to stuck messages if you don't enable this
   feature.  Should i enable these parameters?
** "networkTTL", "messageTTL", "consumerTTL" - until recently, we kept these
   at the defaults (1).  However, we recently realized that we can end up
   stuck messages with these settings.  I've seen a couple of places which
   recommend setting "networkTTL" to the number of brokers in the network
   (e.g. [2]), or at least something > 1.  However, the recommendation for
   "consumerTTL" on [1] is that this value should be 1 in a mesh network
   setting the "networkTTL" will set the "consumerTTL" as well).
   Additionally, [2] seems to imply that enabling
   "suppressDuplicateQueueSubscriptions" acts like "networkTTL" is 1 for
   messages (unsure what this means?).  We ended up setting only the
   "messageTTL" and this seemed to solve our immediate problem.  Unsure if
   will cause other problems...?
** "prefetchSize" - defaults to 1000, but I see recommendations that it
   be 1 for network connectors (e.g. [3]).  I think that in our initial
   testing i saw bad things happen with this setting and got more even load
   balancing by lowering it to 1.
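For concreteness, all of the options above are attributes of the
networkConnector element.  A sketch pulling them together (the broker name,
URI, and the networkTTL value are purely illustrative, not a recommendation;
networkTTL is shown set to a hypothetical broker count per the advice in [2]):

```xml
<networkConnectors>
  <networkConnector name="toBrokerB"
      uri="static:(tcp://brokerB:61616)"
      dynamicOnly="true"
      decreaseNetworkConsumerPriority="true"
      suppressDuplicateQueueSubscriptions="true"
      networkTTL="4"
      prefetchSize="1"
      duplex="false"/>
</networkConnectors>
```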

I think that about summarizes my questions and confusion.  Any help would be
appreciated.
