activemq-users mailing list archives

From denis_the_user <>
Subject General Design Help
Date Tue, 10 Nov 2009 14:58:15 GMT


I'm looking for a high-performance solution with a message throughput of about
10,000 messages per second.
I think that's no problem at all.

My System:
- Some multi-core systems
- ActiveMQ 5.3
- Apache tomcat + Java Webapp handling Producers
- Java tool handling Consumers and delivering to the target system.
- Messages of type Add/Update/Delete

But there are some limiting constraints I have to take care of:

1) I need to make sure that no message is lost.
For that case I did some testing with redundant brokers on different hosts.
If one dies the other takes over, no problem, but I'm still losing messages. I'm
using the failover configuration of the Java API for producer and consumer, and
the API works fine. But I think the ActiveMQ server uses a transaction to
persist data to the MySQL cluster, and if I kill one broker (using kill -9 <pid>)
the transaction is not completed and the messages are lost. Using just kill <pid>,
no message is lost, but sometimes there is one more than I sent.

I hope that behaviour is just a configuration mistake on my part.

I'm using just two small configuration settings for the factory I use to create
producers:
ActiveMQConnectionFactory connectionFactory = new
ActiveMQConnectionFactory(user, password, url);

My Session uses:

Per message I run the following code:
if (transacted) this.session.getSession().commit();

I know that I create one new producer per message; sure, I could change that.
But why are messages lost on a broker failure?
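To make this concrete, here is roughly my producer path as a self-contained sketch. The broker hosts, port, queue name, and credentials are placeholders, and I'm assuming a failover transport URL and a transacted session with a commit per message, as described above:

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        // failover: the transport reconnects to the surviving broker
        // (host names and credentials are placeholders)
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "user", "password",
                "failover:(tcp://brokerA:61616,tcp://brokerB:61616)");
        Connection connection = factory.createConnection();
        connection.start();

        // transacted session: the send only becomes visible to consumers
        // after commit()
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue queue = session.createQueue("updates");
        MessageProducer producer = session.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker restart

        TextMessage message = session.createTextMessage("Update entry 42");
        producer.send(message);
        session.commit(); // per-message commit, as in my code above

        producer.close();
        session.close();
        connection.close();
    }
}
```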

2) The second fact I need to care about: the order of the messages is
important. Reason: one message tells the system to update, another message
to delete. If the entry is deleted before the update, the system throws an error.
Is there any possibility for multiple brokers to preserve the producing
order? Use case: one server slows down while the other one delivers normally.
The messages get mixed up and the consuming system runs into an error.
I'm sure I could just use one broker and ensure the order, but that's
perhaps too slow.
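On the consumer side, my Java tool does roughly the following (again only a sketch; the queue name, hosts, and credentials are placeholders). I'm assuming a transacted session here, so a rollback causes redelivery, which might also explain the occasional duplicate I mentioned in point 1:

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ConsumerSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "user", "password",
                "failover:(tcp://brokerA:61616,tcp://brokerB:61616)");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Queue queue = session.createQueue("updates");
        MessageConsumer consumer = session.createConsumer(queue);

        while (true) {
            Message message = consumer.receive(1000); // wait up to 1 s
            if (message == null) continue;
            try {
                // deliver to the target system here (Add/Update/Delete)
                System.out.println(((TextMessage) message).getText());
                session.commit();   // message is removed from the queue
            } catch (Exception e) {
                session.rollback(); // message is redelivered (possible duplicate)
            }
        }
    }
}
```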

For more information, here is my broker configuration:
<beans xmlns="">

    <broker xmlns="" brokerName="localhost" useJmx="true">

        <persistenceAdapter>
            <journaledJDBC dataDirectory="${activemq.base}/data"
                dataSource="#mysql-ds"/>
        </persistenceAdapter>

        <transportConnectors>
            <transportConnector name="openwire" uri="tcp://"/>
        </transportConnectors>

        <destinationPolicy>
            <policyMap>
                <policyEntries>
                    <policyEntry queue=">" producerFlowControl="true">
                        <deadLetterStrategy>
                            <individualDeadLetterStrategy queuePrefix="DLQ."
                                useQueueForQueueMessages="true"/>
                        </deadLetterStrategy>
                    </policyEntry>
                    <policyEntry topic=">" producerFlowControl="true"/>
                </policyEntries>
            </policyMap>
        </destinationPolicy>

        <managementContext>
            <managementContext createConnector="true"/>
        </managementContext>

        <systemUsage>
            <systemUsage sendFailIfNoSpace="true">
                <memoryUsage>
                    <memoryUsage limit="1024 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="2 gb" name="foo"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="1000 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>
    </broker>

    <bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value=""/>
        <property name="username" value="activemq"/>
        <property name="password" value="activepwd"/>
        <property name="maxActive" value="200"/>
        <property name="poolPreparedStatements" value="true"/>
    </bean>

    <jetty xmlns="">
        <connectors>
            <nioConnector port="8161"/>
        </connectors>
        <handlers>
            <webAppContext contextPath="/admin"
                resourceBase="${activemq.base}/webapps/admin" logUrlOnStart="true"/>
            <webAppContext contextPath="/fileserver"
                resourceBase="${activemq.base}/webapps/fileserver" logUrlOnStart="true"/>
        </handlers>
    </jetty>
</beans>


I hope someone can help me, or can give me some good advice on cluster design
and a configuration example.
