activemq-users mailing list archives

From skomarla <>
Subject Re: Master/Slave JDBC persistence using embedded broker in JBoss
Date Mon, 28 Jul 2008 21:36:34 GMT

I did see that, but I was hoping for something a little more elegant.  Having
to define all the failover URLs defeats the purpose of the in-VM protocol in
the case of an embedded master/slave.
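To illustrate, the kind of configuration I'd rather avoid looks roughly like this (the hostnames and bean id are just placeholders):

```xml
<!-- Hypothetical Spring bean: every node has to enumerate every broker's
     TCP address, even though each node also hosts a local in-VM broker -->
<bean id="connectionFactory"
      class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL"
            value="failover:(tcp://node1:61616,tcp://node2:61616)"/>
</bean>
```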

It would have been nice if the in-VM broker could start in a "slave" mode,
where it would forward messages to the master.  Perhaps I misunderstood
ActiveMQ, but I thought broker discovery would help achieve this.  I can't use
networked brokers because I want all the nodes to share the same database.
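For reference, what I have in mind is a shared-database (JDBC master/slave) setup, roughly sketched below; the broker name and datasource reference are placeholders, and whichever embedded broker acquires the database lock first becomes master while the others wait as slaves:

```xml
<!-- Rough sketch of an embedded broker using JDBC master/slave:
     all nodes point at the same datasource and compete for the DB lock -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="node1">
  <persistenceAdapter>
    <jdbcPersistenceAdapter dataSource="#shared-ds"/>
  </persistenceAdapter>
  <transportConnectors>
    <transportConnector uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```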

Hans Bausewein wrote:
> First of all, I'm not sure whether I'm the right person to answer your
> questions.
> There may be more uses of ActiveMQ than I know of.
> I've deployed 5.1.0 and the development version in a clustered JBoss
> environment with pure master/slave and as a single broker (because of some
> issues). 
> skomarla wrote:
>> Hello,
>> I'm having trouble getting a master slave setup working using an embedded
>> broker in JBoss with JDBC persistence. I've done some searching to see if
>> there is a workaround, but the only thing i've seen seems to be to either
>> run master/slave separately, which is what I'm trying to avoid.
>> The overview of the architecture is this.  
>> 1) There will be multiple JBoss instances each hosting a spring based
>> application, which contain JMS listeners (using the spring's
>> org.springframework.jms.listener.DefaultMessageListenerContainer).  
>> 2) In order to keep the deployment as simple as possible, we have decided
>> to use the embedded activemq broker and have each node use the in vm
>> transport protocol.
>> 3) I'm using activemq 5.1.0, and so I don't have a problem with the slave
>> node completing its startup sequence.
>> My problem occurs when my spring based application deploys.  
>> - The master node completes its start up sequence and the application
>> deploys.  
>> - The slave node completes its startup sequence without blocking, but when
>> the application deploys, the DefaultMessageListenerContainer repeatedly
>> tries to connect to the broker.  This fails with the exception below
>> (seemingly because the slave has not started its transport connectors,
>> which it only does when it becomes the master).
> The documentation is quite clear:
> "Only the master broker starts up its transport connectors and so the
> clients can only connect to the master."
> If you want to know why, one of the designers/developers can probably
> answer that better.
> I guess it's a lot easier to accept messages at a single entry point.
> Hans
