activemq-users mailing list archives

From Gary Tully <gary.tu...@gmail.com>
Subject Re: General Design Help
Date Wed, 11 Nov 2009 16:41:57 GMT
To explain the extra message: you would need to be using trackMessages
on the failover transport, or transactions for the producer.
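
For reference, trackMessages is an option on the failover transport URL
used by the client. A minimal sketch of how it could be enabled (host
names taken from your use case below, credentials are placeholders):

import org.apache.activemq.ActiveMQConnectionFactory;

// failover URL with message tracking enabled; maxCacheSize (in bytes) caps
// the memory used to track in-flight messages for replay on reconnect
String url = "failover:(tcp://host_1:61616,tcp://host_2:61616)"
        + "?trackMessages=true&maxCacheSize=131072";
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("user", "password", url);
factory.setAlwaysSyncSend(true); // as in your existing producer setup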

With a sync send, each message send is complete when the broker
responds to the send, indicating that the message is safely stored.
In a transaction, the transaction is complete when a response to the
commit message is received.
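
As a rough sketch of the transacted case (illustrative code, not taken
from your application; exception handling omitted):

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

Connection connection = new ActiveMQConnectionFactory(
        "failover:(tcp://host_1:61616,tcp://host_2:61616)").createConnection();
connection.start();
// transacted session: sends are only complete once the commit is acknowledged
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageProducer producer = session.createProducer(session.createQueue("myqueue"));
producer.send(session.createTextMessage("payload"));
session.commit();   // blocks until the broker has responded to the commit
connection.close();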

If the broker dies with an in-flight commit message, such that it has
committed the transaction but not yet replied to the client, the sync
commit is in trouble: it can either fail or fail over.

In the failover case, the failover transport keeps a request cache that
recreates the session and transaction and retries the send on
reconnect, essentially resending the in-flight work. This results in a
duplicate message send or a replay of the transaction.
Where the failover goes to a new broker, that broker happily accepts
the transaction as new and accepts the message, hence the extra message.

---
Re producer flow control: the broker should be able to recover and
resume accepting messages when space is again available, so you should
not need to restart it. Reclaiming space may take a little while, as
the store is reclaimed in unused blocks; configuring the data file size
and checkpoint interval would help with reclamation.
Alternatively you could use vm cursors, which block on memory usage
rather than disk usage.
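
If you want to try the vm cursor, it goes in as a pendingQueuePolicy on
your queue policy entry (a vmQueueCursor element in activemq.xml).
Sketched in Java for an embedded broker, with your existing 20mb queue
limit assumed:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;
import org.apache.activemq.broker.region.policy.VMPendingQueueMessageStoragePolicy;

PolicyEntry queuePolicy = new PolicyEntry();
queuePolicy.setQueue(">");                        // all queues, as in your policyEntry
queuePolicy.setProducerFlowControl(true);
queuePolicy.setMemoryLimit(20 * 1024 * 1024);     // 20mb memory limit
// vm cursor: pending messages are kept in memory, so producers block on
// memory usage rather than on store (disk) usage
queuePolicy.setPendingQueuePolicy(new VMPendingQueueMessageStoragePolicy());

PolicyMap policyMap = new PolicyMap();
policyMap.setDefaultEntry(queuePolicy);

BrokerService broker = new BrokerService();
broker.setDestinationPolicy(policyMap);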

2009/11/11 denis_the_user <denis.ehrich@b-s-s.de>:
>
> Thanks a lot.
> Now I no longer lose messages if I kill one broker. But I'm not sure
> why there is one message more than I sent to the queue.
>
> UseCase:
>
> 0) Preparation: start broker_1 on host_1 and broker_2 on host_2, both
> pointing to the same MySQL connection on host_1.
>
> 1) First I start a Java tool that sends exactly 20,000 messages to my
> broker_1 at localhost.
>
> 2) Via the Jetty web frontend I watch the queue size grow. After 3000
> messages are in the queue I execute kill -9 on broker_1. Broker_2 takes over.
>
> 3) After a few seconds all messages have been received by the broker. The
> Jetty web frontend shows 20,001 messages in my queue.
>
> 4) Executing select count(*) from ACTIVEMQ_MSGS; against my DB also gives
> 20,001 messages.
>
> Is this a known issue? Could someone help me understand it and perhaps
> solve the problem, so that I can make sure no message is delivered twice?
>
> I also noticed another problem while running a lot of messages against
> ActiveMQ. After a message overflow (no-more-storage-space exception) I had
> problems accessing the broker. Are there any recovery tasks to do after an
> overflow?
>
> Exception:
> INFO | Usage Manager Store is Full. Stopping producer
> (ID:test.system.info-54693-1257848336902-0:0:13634:1) to prevent flooding
> queue://myqueue. See http://activemq.apache.org/producer-flow-control.html
> for more info
>
> Do I have to restart ActiveMQ after that fault? I would like to keep
> feeding messages once more messages have been consumed and persistence
> space is freed.
>
>
>
> Gary Tully wrote:
>>
>> For 1) don't use journaled JDBC in a failover setup because the
>> journal is not replicated. Use simple JDBC or revert to a shared file
>> system setup.
>>
>> 2) Producer order cannot be maintained across brokers. If it is vital,
>> you need a single broker or you must partition your data across destinations.
>>
>> 2009/11/10 denis_the_user <denis.ehrich@b-s-s.de>:
>>>
>>> Hey.
>>>
>>> I'm looking for a high-performance solution for a message throughput
>>> of about 10,000 messages per second.
>>> I think that's no problem at all.
>>>
>>> My System:
>>> - Some multi-core systems
>>> - ActiveMQ 5.3
>>> - Apache Tomcat + Java webapp handling producers
>>> - Java tool handling consumers and delivering to the target system.
>>> - Messages of type Add/Update/Delete
>>>
>>> But there are some limiting factors I have to take care of:
>>>
>>> 1) I need to make sure that no message is lost.
>>> For that I did some testing with redundant brokers on different hosts.
>>> If one dies the other takes over, no problem, but I am still losing
>>> messages. I'm using the failover configuration of the Java API for
>>> producer and consumer; the API works fine. But I think the ActiveMQ
>>> server uses a transaction to persist data to the MySQL cluster, and if
>>> I kill one broker (using kill -9 <pid>) the transaction is not finished
>>> and the messages are lost. Using just kill <pid> no message is lost,
>>> but sometimes there is one more than I sent.
>>>
>>> I hope that behaviour is just a configuration mistake on my part.
>>>
>>> I'm using just two small configuration settings on the factory I use
>>> to create producers:
>>> ActiveMQConnectionFactory connectionFactory =
>>>     new ActiveMQConnectionFactory(user, password, url);
>>> connectionFactory.setAlwaysSessionAsync(false);
>>> connectionFactory.setAlwaysSyncSend(true);
>>>
>>> My Session uses:
>>> Session.AUTO_ACKNOWLEDGE
>>>
>>> Per message I run the following code:
>>> this.session.getProducer().send(message);
>>> if (transacted) this.session.getSession().commit();
>>> this.session.close();
>>>
>>> I know that I create one new producer per message; sure, I could
>>> change that. But why are messages lost when a broker fails?
>>>
>>> 2) The second thing I need to take care of: the order of the messages
>>> is important. Reason: one message tells the system to update, the other
>>> message to delete. If the entry is deleted before the update, the
>>> system throws an error.
>>> Is there any way for multiple brokers to preserve producer order? Use
>>> case: one server slows down and the other one delivers normally,
>>> messages get mixed up, and the consuming system runs into errors.
>>> I'm sure I could just use one broker and ensure the order, but that's
>>> perhaps too slow.
>>>
>>> For more information my broker configuration:
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>   xmlns:amq="http://activemq.apache.org/schema/core"
>>>   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>   xsi:schemaLocation="http://www.springframework.org/schema/beans
>>>     http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
>>>     http://activemq.apache.org/schema/core
>>>     http://activemq.apache.org/schema/core/activemq-core.xsd">
>>>
>>>   <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"/>
>>>
>>>   <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true">
>>>
>>>     <persistenceAdapter>
>>>       <journaledJDBC dataDirectory="${activemq.base}/data" dataSource="#mysql-ds"/>
>>>     </persistenceAdapter>
>>>
>>>     <transportConnectors>
>>>       <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
>>>     </transportConnectors>
>>>
>>>     <destinationPolicy>
>>>       <policyMap>
>>>         <policyEntries>
>>>           <policyEntry queue=">" producerFlowControl="true" memoryLimit="20mb">
>>>             <deadLetterStrategy>
>>>               <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
>>>             </deadLetterStrategy>
>>>           </policyEntry>
>>>           <policyEntry topic=">" producerFlowControl="true" memoryLimit="20mb"/>
>>>         </policyEntries>
>>>       </policyMap>
>>>     </destinationPolicy>
>>>
>>>     <managementContext>
>>>       <managementContext createConnector="true"/>
>>>     </managementContext>
>>>
>>>     <systemUsage>
>>>       <systemUsage sendFailIfNoSpace="true">
>>>         <memoryUsage>
>>>           <memoryUsage limit="1024 mb"/>
>>>         </memoryUsage>
>>>         <storeUsage>
>>>           <storeUsage limit="2 gb" name="foo"/>
>>>         </storeUsage>
>>>         <tempUsage>
>>>           <tempUsage limit="1000 mb"/>
>>>         </tempUsage>
>>>       </systemUsage>
>>>     </systemUsage>
>>>
>>>   </broker>
>>>
>>>   <bean id="mysql-ds" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
>>>     <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
>>>     <property name="url" value="jdbc:mysql://localhost/activemq?relaxAutoCommit=true"/>
>>>     <property name="username" value="activemq"/>
>>>     <property name="password" value="activepwd"/>
>>>     <property name="maxActive" value="200"/>
>>>     <property name="poolPreparedStatements" value="true"/>
>>>   </bean>
>>>
>>>   <jetty xmlns="http://mortbay.com/schemas/jetty/1.0">
>>>     <connectors>
>>>       <nioConnector port="8161"/>
>>>     </connectors>
>>>
>>>     <handlers>
>>>       <webAppContext contextPath="/admin" resourceBase="${activemq.base}/webapps/admin" logUrlOnStart="true"/>
>>>       <webAppContext contextPath="/fileserver" resourceBase="${activemq.base}/webapps/fileserver" logUrlOnStart="true"/>
>>>     </handlers>
>>>   </jetty>
>>>
>>> </beans>
>>>
>>> I hope someone can help me, or give me some good advice on cluster
>>> design and a configuration example.
>>>
>>> --
>>> View this message in context:
>>> http://old.nabble.com/General-Design-Help-tp26284819p26284819.html
>>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>>>
>>>
>>
>>
>>
>> --
>> http://blog.garytully.com
>>
>> Open Source Integration
>> http://fusesource.com
>>
>>
>
> --
> View this message in context: http://old.nabble.com/General-Design-Help-tp26284819p26302581.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
>



-- 
http://blog.garytully.com

Open Source Integration
http://fusesource.com
