activemq-users mailing list archives

From Rallavagu <rallav...@gmail.com>
Subject Re: ActiveMQ deployment
Date Wed, 02 Dec 2015 19:04:07 GMT
Nice. Thanks Raffi. This is helpful.

On 12/2/15 10:57 AM, Basmajian, Raffi wrote:
> NoB is not an alternative to Master/Slave because they solve different problems.
>
> As I said earlier, each broker exposes a simple HTTP service at http://broker:8181/health, returning simple JSON indicating whether that broker is currently master or slave. F5 performs a GET request on these HTTP endpoints every 5s and searches for "master" in the response; if it finds it, that broker is considered available, otherwise it removes it from the pool.
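For reference, here is a rough sketch of what such a health endpoint could look like: a tiny HTTP listener inside the broker JVM that reads the broker's JMX "Slave" attribute and answers with a one-field JSON document. This is illustrative only, not Raffi's actual service; the broker name "localhost" and port 8181 are assumptions.

    import com.sun.net.httpserver.HttpServer;
    import java.lang.management.ManagementFactory;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class BrokerHealthService {
        public static void main(String[] args) throws Exception {
            // Platform MBean server; this assumes the service runs inside the broker JVM
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // ActiveMQ 5.8+ style broker MBean name; "localhost" is an assumed brokerName
            ObjectName broker = new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");

            HttpServer http = HttpServer.create(new InetSocketAddress(8181), 0);
            http.createContext("/health", exchange -> {
                String role;
                try {
                    boolean slave = (Boolean) mbs.getAttribute(broker, "Slave");
                    role = slave ? "slave" : "master";
                } catch (Exception e) {
                    role = "unknown"; // broker MBean not registered (yet)
                }
                byte[] body = ("{\"role\":\"" + role + "\"}").getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            http.start(); // F5 can now GET http://broker:8181/health
        }
    }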
>
> And you can combine M/S with NoB without an intermediate load-balancer; it just means all clients need a complex failover connection URL containing all broker hosts. The advantage of using the LB/wide-IP is avoiding all this complexity at the client level.
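To make the trade-off concrete (hostnames below are made up, not the real topology): without the wide-IP, every client carries something like

    failover:(tcp://brokerA1:61616,tcp://brokerA2:61616,tcp://brokerB1:61616,tcp://brokerB2:61616)?randomize=false

whereas with the LB/wide-IP in front it collapses to

    failover:(tcp://eventbus:61616)

and adding or moving brokers never requires touching client configuration.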
>
> Hope that helps
> Raffi
>
> -----Original Message-----
> From: Rallavagu [mailto:rallavagu@gmail.com]
> Sent: Wednesday, December 02, 2015 1:34 PM
> To: users@activemq.apache.org
> Subject: Re: ActiveMQ deployment [ EXTERNAL ]
>
> Raffi, Thanks for the pointer.
>
> Did a bit of reading to understand the "updateClusterClients"
> option, and also read the blog
>
> http://bsnyderblog.blogspot.com/2010/10/new-features-in-activemq-54-automatic.html
>
> As per these options and the blog, it appears to me that NOB with "updateClusterClients" is potentially an alternative to a master/slave deployment. However, it does not offer an H/A solution like master/slave does.
>
> As you mentioned in one of your previous responses, you are actually leveraging an F5 load balancer to "talk" to master/slave. Is the F5 LB configured with a master/slave JMS client? Without an intermediate layer such as the F5 LB, I suppose we can't have a combination of Master/Slave and NoB with the "updateClusterClients" option.
>
> On 12/1/15 6:26 PM, Basmajian, Raffi wrote:
>> Hi Rallavagu,
>>
>> When using "failover:" from client, if the transport connector has
>> updateClusterClients="true", the client monitors changes in the
>> broker cluster, allowing the client to maintain a list of active
>> brokers to use for connection failover. We've tested this feature and
>> were very impressed at how well it works. We observed clients failing
>> over to new brokers seamlessly, and very fast, no exceptions thrown,
>> well, at least none propagated to application code which are the ones
>> we care about :-)
>>
>> This feature is supported for openwire transport only, not stomp, ws, amqp:
>>
>>     <transportConnectors>
>>       <transportConnector
>>                        name="openwire"
>>                        uri="tcp://0.0.0.0:61616"
>>                        updateClusterClients="true"/>
>>     </transportConnectors>
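For context, a minimal client-side sketch that goes with this config (broker host and queue name are placeholders): the client lists only one seed broker, and because the connector has updateClusterClients="true" the broker informs the connected client about the other brokers it can fail over to.

    import javax.jms.Connection;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ClusterAwareConsumer {
        public static void main(String[] args) throws Exception {
            // Only one seed broker is listed; with updateClusterClients="true" the broker
            // tells the connected client about the rest of the cluster for failover.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("failover:(tcp://broker1:61616)");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("TEST.QUEUE"));
            System.out.println("Received: " + consumer.receive(5000));
            connection.close();
        }
    }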
>>
>>
>> Full reference here
>> http://activemq.apache.org/failover-transport-reference.html
>>
>> Hope that helps
>> Raffi
>>
>> -----Original Message-----
>> From: Rallavagu [mailto:rallavagu@gmail.com]
>> Sent: Tuesday, December 01, 2015 7:33 PM
>> To: users@activemq.apache.org
>> Subject: Re: ActiveMQ deployment [ EXTERNAL ]
>>
>> Raffi, Thanks. This is interesting.
>>
>> What do you mean by "If connection fails, assuming transport connector is configured to update client with cluster changes", as the client is configured with only "failover:(tcp://eventbus:61616)"?
>>
>>
>>
>> On 12/1/15 4:23 PM, Basmajian, Raffi wrote:
>>> That's exactly the configuration we're building; M/S pairs with NoB, connected via a complete graph.
>>>
>>> All clients connect using wide-IP "failover:(tcp://eventbus:61616)", that's it. We did this for two reasons:
>>> 1) to avoid messy failover configuration on the client,
>>> 2) to avoid client-reconfig when topology is scaled out.
>>>
>>> Each broker has a special HTTP service that runs inside the broker and queries local JMX, responding with the following JSON:
>>>
>>> {role:master}  or {role:slave}
>>>
>>> This makes it easy to implement heartbeat logic using a hardware load-balancer, like F5.
>>> F5 now pings each broker every 10s to determine which ones are active and which are "master"; slaves and inactive nodes are removed from the F5 pool.
>>> When a client connects using "failover:(tcp://eventbus:61616)", DNS routes to F5 first, then F5 connects the client to the master broker in the nearest datacenter; this is done for the initial connection only.
>>> If the connection fails, assuming the transport connector is configured to update clients with cluster changes, the client will reconnect on its own; F5 does not handle that, which is exactly what we wanted: control the initial connect to simplify client config, but leverage the ActiveMQ cluster-aware client library to manage connection failovers.
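Worth noting for anyone copying this pattern: the reconnect behaviour the failover transport takes over after that initial F5-routed connect can be tuned with the usual failover options, for example (values are illustrative, not necessarily what Raffi's team uses):

    failover:(tcp://eventbus:61616)?initialReconnectDelay=100&maxReconnectDelay=10000&maxReconnectAttempts=-1&useExponentialBackOff=true

Here -1 keeps the client retrying indefinitely on recent broker versions; these options are documented on the failover transport reference page linked earlier in the thread.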
>>>
>>> Hope that helps,
>>>
>>> Raffi
>>>
>>>
>>> -----Original Message-----
>>> From: Rallavagu [mailto:rallavagu@gmail.com]
>>> Sent: Tuesday, December 01, 2015 2:57 PM
>>> To: users@activemq.apache.org
>>> Subject: Re: ActiveMQ deployment [ EXTERNAL ]
>>>
>>> Now I am getting a clearer picture of the options. Essentially, NOB provides load balancing while Master/Slave offers pure failover. In case I go with a combination where a Master/Slave cluster is configured with NOB with another Master/Slave cluster, how would the client failover configuration work?
>>>
>>> Will a set of consumers always connect to one of the Master/Slave clusters? In this case, how would load balancing work? Thanks.
>>>
>>> On 12/1/15 11:32 AM, Basmajian, Raffi wrote:
>>>> NoB forwards messages based on consumer demand, not for achieving failover.
>>>> You can get failover on the client using standalone brokers, just use the failover:() protocol from the client.
>>>> Master/Slave is true failover.
>>>>
>>>> -----Original Message-----
>>>> From: Rallavagu [mailto:rallavagu@gmail.com]
>>>> Sent: Tuesday, December 01, 2015 1:06 PM
>>>> To: users@activemq.apache.org
>>>> Subject: Re: ActiveMQ deployment [ EXTERNAL ]
>>>>
>>>> Thanks again Johan. As the failover is configured at the client end, what would the configuration for a combined deployment look like?
>>>>
>>>> I was thinking along the lines of NOB because the messages are
>>>> forwarded to other broker(s), thus achieving failover capability: in
>>>> case the original broker fails, duplicate messages are available on
>>>> the second (other) broker(s). Am I off in my assumption?
>>>>
>>>> On 12/1/15 9:35 AM, Johan Edstrom wrote:
>>>>> You want to combine them. The NOB is for communication, but JMS is still store and forward, i.e. if a machine dies, you can have multiple paths, but what was in the persistence store of said machine is still "dead" until the machine is revived; that's where the Master/Slave(s) come in. They'll jump in and start playing that persistence store.
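To illustrate that point: in a shared-file-system master/slave, both brokers point their persistence adapter at the same store, and whichever broker grabs the file lock becomes master; when it dies, the slave acquires the lock and starts serving that same store. A minimal sketch (the shared directory path is an assumption; the same idea applies to the replicated LevelDB store mentioned later in the thread):

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
      <persistenceAdapter>
        <!-- both brokers in the pair reference the same shared directory;
             the KahaDB file lock elects the master -->
        <kahaDB directory="/mnt/shared/activemq/kahadb"/>
      </persistenceAdapter>
    </broker>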
>>>>>
>>>>> /je
>>>>>
>>>>>> On Nov 30, 2015, at 10:57 PM, Rallavagu <rallavagu@gmail.com> wrote:
>>>>>>
>>>>>> Thanks Johan.
>>>>>>
>>>>>> My goal is to achieve high availability (with failover) for producers and consumers, in addition to mitigating the situation where "there is a chance that one broker has producers but no consumers".
>>>>>>
>>>>>> As per the documentation, it sounds like NOB is an option which can offer failover and scalability. I was wondering if Master/Slave is the only option to achieve high availability, but it appears to me that NOB can offer the same. Wanted to check with folks on this list whether there is anything I am missing.
>>>>>>
>>>>>>
>>>>>> On 11/30/15 9:28 PM, Johan Edstrom wrote:
>>>>>>> What you probably want is a combination of HA and communication.
>>>>>>>
>>>>>>> HA, i.e. master and slave(s) (depending on storage), gives you uptime.
>>>>>>> NOB gives you communication paths, and as such scalability and, for some value of it, versatility.
>>>>>>>
>>>>>>> You can also use the two above and combine them with bridges to build small, scalable clouds that forward much like, say, enterprise email systems.
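As one concrete way to wire the "bridge the clusters" idea: each master/slave pair can run a networkConnector that targets the other pair using the masterslave: URI, so the bridge always follows whichever remote broker currently holds the master role. Hostnames here are made up; treat this as a sketch, not a recommended config:

    <networkConnectors>
      <!-- duplex bridge from this cluster to the remote master/slave pair -->
      <networkConnector name="to-site-b"
                        uri="masterslave:(tcp://siteB-broker1:61616,tcp://siteB-broker2:61616)"
                        duplex="true"/>
    </networkConnectors>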
>>>>>>>
>>>>>>> You can also go a completely different route and say that in your enterprise you only use central brokers for messages between systems of importance, and then use local broker networks for message patterns, aggregation, etc.
>>>>>>>
>>>>>>>
>>>>>>> There is no one solution here. If you have more specific questions, it might be easier for people here to help, as we might have more associations possible.
>>>>>>>
>>>>>>> /je
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Nov 30, 2015, at 3:25 PM, Rallavagu <rallavagu@gmail.com> wrote:
>>>>>>>>
>>>>>>>> After spending some time reading, with reference to the
>>>>>>>> following link,
>>>>>>>>
>>>>>>>> http://activemq.apache.org/clustering.html
>>>>>>>>
>>>>>>>> What I am trying to figure out is whether it is possible to achieve a fault-tolerant cluster by deploying "Networks of brokers" alone, or whether I should consider "Master/Slave" in addition to "Networks of brokers". Not sure how the hybrid deployment works. Any comments would help. Thanks.
>>>>>>>>
>>>>>>>> On 11/25/15 11:13 AM, Rallavagu wrote:
>>>>>>>>> Any takers on this? Thanks.
>>>>>>>>>
>>>>>>>>> On 11/24/15 1:37 PM, Rallavagu wrote:
>>>>>>>>>> All,
>>>>>>>>>>
>>>>>>>>>> What is the recommended deployment architecture for an enterprise?
>>>>>>>>>>
>>>>>>>>>> 1. Master/Slave with replicated LevelDB
>>>>>>>>>> (http://activemq.apache.org/replicated-leveldb-store.html)
>>>>>>>>>>
>>>>>>>>>> 2. Network of Brokers for scalability
>>>>>>>>>>
>>>>>>>>>> 3. Hybrid
>>>>>>>>>>
>>>>>>>>>> In case of hybrid, is there a reference document that I could use?
>>>>>>>>>> Thanks.
>>>>>>>
>>>>>
>>>>
>>>>
