activemq-dev mailing list archives

From "Gary Tully (Resolved) (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (AMQ-3568) Consumer auto acking of duplicate message dispatch can lead to Unmatched acknowledge: and redelivery
Date Thu, 27 Oct 2011 10:28:32 GMT

     [ https://issues.apache.org/jira/browse/AMQ-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Gary Tully resolved AMQ-3568.
-----------------------------

    Resolution: Fixed

fix in http://svn.apache.org/viewvc?rev=1189700&view=rev

Added a warning when this occurs, as it indicates that duplicate suppression using the
store producer audit is lacking. This sort of duplicate should be identified at source:
the source is a failover reconnect with a pending send that happens after the original
message is dispatched and after the cursor and producer audit windows are exceeded.
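The reconnect scenario described above arises with a client that uses the failover transport. As a hedged illustration (the bean wiring and broker address below are placeholders, not taken from the report):

```xml
<!-- Hypothetical Spring wiring for a client on the failover transport.
     A send that is still pending when the connection drops is replayed
     on reconnect, which is how the duplicate can enter the broker. -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="failover:(tcp://localhost:61616)"/>
</bean>
```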

                
> Consumer auto acking of duplicate message dispatch can lead to Unmatched acknowledge: and redelivery
> ----------------------------------------------------------------------------------------------------
>
>                 Key: AMQ-3568
>                 URL: https://issues.apache.org/jira/browse/AMQ-3568
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker, JMS client
>    Affects Versions: 5.5.0, 5.5.1
>            Reporter: Gary Tully
>            Assignee: Gary Tully
>              Labels: dispatch, duplicate, failover
>             Fix For: 5.6.0
>
>
> {code}
> javax.jms.JMSException: Unmatched acknowledge: MessageAck {commandId = 4208, responseRequired = false, ackType = 2, consumerId = ID:gtmbp.local-35153-1319651042567-3:2:1:1, firstMessageId = ID:gtmbp.local-35153-1319651042567-3:2:1:975:2, lastMessageId = ID:gtmbp.local-35153-1319651042567-3:2:1:1050:2, destination = queue://TestQueue, transactionId = null, messageCount = 151, poisonCause = null};
> Expected message count (151) differs from count in dispatched-list (152)
> 	at org.apache.activemq.broker.region.PrefetchSubscription.assertAckMatchesDispatched(PrefetchSubscription.java:455)
> 	at org.apache.activemq.broker.region.PrefetchSubscription.acknowledge(PrefetchSubscription.java:206)
> 	at org.apache.activemq.broker.region.AbstractRegion.acknowledge(AbstractRegion.java:427)
> 	at org.apache.activemq.broker.region.RegionBroker.acknowledge(RegionBroker.java:569)
> 	at org.apache.activemq.broker.BrokerFilter.acknowledge(BrokerFilter.java:77)
> 	at org.apache.activemq.broker.TransactionBroker.acknowledge(TransactionBroker.java:276)
> 	at org.apache.activemq.broker.MutableBrokerFilter.acknowledge(MutableBrokerFilter.java:87)
> 	at org.apache.activemq.broker.MutableBrokerFilter.acknowledge(MutableBrokerFilter.java:87)
> 	at org.apache.activemq.broker.TransportConnection.processMessageAck(TransportConnection.java:477)
> 	at org.apache.activemq.command.MessageAck.visit(MessageAck.java:229)
> 	at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:318)
> 	at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:181)
> 	at org.apache.activemq.transport.MutexTransport.onCommand(MutexTransport.java:50)
> 	at org.apache.activemq.transport.WireFormatNegotiator.onCommand(WireFormatNegotiator.java:113)
> 	at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(AbstractInactivityMonitor.java:229)
> 	at org.apache.activemq.transport.TransportSupport.doConsume(TransportSupport.java:83)
> 	at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:222){code}
> The problem occurs when a duplicate dispatch is one of many in-flight messages to a destination. The duplicate detection auto acks with a standard ack in place of an individual ack. The standard ack results in an exception in this case because it does not match the broker's dispatch list. Setting optimizeAcknowledge on the connection factory seems to make this more probable. The duplicate originates from a failover recovery/reconnect resend.
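For reference, optimizeAcknowledge can be enabled on the connection factory or from the client's broker URL; the host and port below are placeholders:

```
failover:(tcp://localhost:61616)?jms.optimizeAcknowledge=true
```

optimizeAcknowledge is off by default; per the description above, enabling it appears to make the ack mismatch more likely because acks are batched rather than sent individually.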
> The end result is pending messages on the queue and redelivery after a restart.
> In some cases, the need for duplicate detection can be circumvented at source via the kahaDB store producer audit; its default LRU cache size is 64, and increasing it can help: {code}<kahaDB ... maxFailoverProducersToTrack="2048" />{code}
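A sketch of where that attribute sits in a broker's persistence-adapter configuration; the directory value is a placeholder:

```xml
<persistenceAdapter>
  <!-- maxFailoverProducersToTrack widens the producer audit so more
       failover producers are remembered for duplicate suppression.
       The directory attribute is a placeholder for this sketch. -->
  <kahaDB directory="${activemq.data}/kahadb"
          maxFailoverProducersToTrack="2048"/>
</persistenceAdapter>
```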

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
