activemq-users mailing list archives

From Justin Bertram <jbert...@redhat.com>
Subject Re: [Artemis 2.1-2.3] Configuration Reload on slave broker.xml causes slave to start/enable acceptors which disables backups
Date Thu, 21 Sep 2017 19:16:53 GMT
> If I make any change at all to the slave broker.xml file, the
"configuration reload" feature takes effect and starts/enables the
acceptors on the Slave.

How did you test that?  Looking at the code, it appears the configuration
reload logic shouldn't touch the acceptors.  Also, I just tested this on a
simple replicated live/backup pair, and when I updated a security-setting on
the backup the acceptors didn't activate and it continued backing up the
live broker as expected.

> is there a way to completely disable configuration reload altogether?

Set <configuration-file-refresh-period> to a really high number.
Technically speaking, this won't disable it completely, but it will
effectively disable it.

> Can configuration reload be configured to also take into account address
and security configuration that has happened via the
management api?

Many of the changes made via the management API are volatile.  However,
adding queues should be persistent.  If reloading broker.xml causes queues
added via the management API to disappear I think that's likely a bug.

> is there a way to configure the configuration reload to consider the fact
that it is supposed to be part of a cluster?

I'd need to understand the problematic use-case better before commenting on
that further.


Justin

On Thu, Sep 21, 2017 at 1:43 PM, Dan Langford <danlangford@gmail.com> wrote:

> Quick Summary: If I make any change at all to the slave broker.xml file, the
> "configuration reload" feature takes effect and starts/enables the
> acceptors on the Slave. This causes the slave to stop backing up the master
> and start accepting its own connections. Also, address and security settings
> that have been made via the management API are lost, and only the broker.xml
> file is considered. I'm wondering whether this is intended behavior, a config
> setting I need to change, or a possible bug. Specific details and examples
> follow. Also, I erroneously created an issue for this already that, based
> on our findings, may need to be closed: ARTEMIS-1429
>
> ======
>
> NODE CONFIG
>
> I am running in a simple Master / Slave cluster. Each node is configured
> such that the cluster is defined with a static connector to the other.
> Startup looks fine, and the Slave stops accepting connections and the backup
> is announced.
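For reference, a minimal sketch of what the slave-side HA/cluster section of
such a broker.xml looks like in Artemis 2.x (the connector names, hosts, and
ports here are hypothetical):

```xml
<core xmlns="urn:activemq:core">
  <connectors>
    <!-- this node and its peer; hosts/ports are hypothetical -->
    <connector name="this-node">tcp://slave-host:61616</connector>
    <connector name="other-node">tcp://master-host:61616</connector>
  </connectors>

  <ha-policy>
    <replication>
      <slave/> <!-- on the master broker this is <master/> -->
    </replication>
  </ha-policy>

  <cluster-connections>
    <cluster-connection name="my-cluster">
      <connector-ref>this-node</connector-ref>
      <!-- cluster defined with a static connector to the other node -->
      <static-connectors>
        <connector-ref>other-node</connector-ref>
      </static-connectors>
    </cluster-connection>
  </cluster-connections>
</core>
```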
>
> QUEUE CONFIG
>
> Let's set up a scenario that demonstrates a few things. Let's say that
> in broker.xml an address named FOO (anycast to a queue named FOO) is
> defined. Security settings also allow role MAVERICK to send and consume.
> Let's also say that after the system started, we created another address
> named BAR (anycast to a queue named BAR) via management operations. We also
> added security settings at runtime to allow role GOOSE to send and consume
> on both FOO and BAR.
>
> *broker.xml*
> address FOO
> role MAVERICK send to FOO
>
> *runtime management*
> address BAR
> role GOOSE send to BAR
> role GOOSE send to FOO
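In broker.xml terms, the statically-defined half of that scenario would look
roughly like this sketch; the BAR address and the GOOSE permissions live only
in runtime state, not in the file:

```xml
<core xmlns="urn:activemq:core">
  <security-settings>
    <security-setting match="FOO">
      <permission type="send" roles="MAVERICK"/>
      <permission type="consume" roles="MAVERICK"/>
    </security-setting>
    <!-- GOOSE's permissions on FOO and BAR were granted via the
         management API, so they do not appear in this file -->
  </security-settings>

  <addresses>
    <address name="FOO">
      <anycast>
        <queue name="FOO"/>
      </anycast>
    </address>
    <!-- address BAR (anycast queue BAR) was created at runtime
         via management operations -->
  </addresses>
</core>
```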
>
> FAILOVER & FAILBACK WORKING
>
> So Master is "serving", if you will, FOO and BAR. GOOSE can send to both
> FOO and BAR. If we turn off Master, then Slave starts listening on the
> acceptors and continues to serve FOO and BAR. The security settings were
> also replicated, so GOOSE can still send to FOO and BAR. Replication is
> working fine. Start Master back up, and Master takes over and the Slave
> turns off its acceptors. This is just as expected, and it works great behind
> our F5/VIP, which sees active pool members based on who is accepting
> requests on port 5672.
>
> PROBLEMS WITH CONFIGURATION RELOAD & BACKUPS
>
> If I make any change at all to the slave broker.xml file, the "configuration
> reload" feature takes effect and starts/enables the acceptors on the Slave.
> The Slave is only "serving" the queues that are defined in broker.xml,
> so in this case it's only serving FOO. Since our VIP now sees that another
> pool member is active, it starts routing traffic to the slave. The slave can
> only take FOO traffic because we have auto-create of queues turned off, so
> BAR traffic that happens to go to the slave is denied. Also, replication now
> seems problematic, as the Slave is no longer backing up the Master and the
> messages now being sent to FOO on the Slave are not being backed up by
> anybody.
>
> In fact, anything configured via management is no longer considered. GOOSE
> can no longer send to FOO. MAVERICK still can.
>
> QUESTIONS
>
> Is this by design? Is there a way to completely disable configuration
> reload altogether? Can configuration reload be configured to also take
> into account the address and security configuration that has happened via
> the management API? Is there a way to configure the configuration reload to
> consider the fact that it is supposed to be part of a cluster?
>
> I am completely open to this being a problem with my setup. I wanted to
> quickly throw this out there; if I need to come back and supply broker.xml
> files, I can create some that use these examples. But maybe this is
> something that has been brought up before.
>
