activemq-dev mailing list archives

From "Gary Tully (Created) (JIRA)" <>
Subject [jira] [Created] (AMQ-3568) Consumer auto acking of duplicate message dispatch can lead to Unmatched acknowledge: and redelivery
Date Wed, 26 Oct 2011 17:51:32 GMT
Consumer auto acking of duplicate message dispatch can lead to Unmatched acknowledge: and redelivery

                 Key: AMQ-3568
             Project: ActiveMQ
          Issue Type: Bug
          Components: Broker, JMS client
    Affects Versions: 5.5.1, 5.5.0
            Reporter: Gary Tully
            Assignee: Gary Tully
             Fix For: 5.6.0

{code}
javax.jms.JMSException: Unmatched acknowledge: MessageAck {commandId = 4208, responseRequired
= false, ackType = 2, consumerId = ID:gtmbp.local-35153-1319651042567-3:2:1:1, firstMessageId
= ID:gtmbp.local-35153-1319651042567-3:2:1:975:2, lastMessageId = ID:gtmbp.local-35153-1319651042567-3:2:1:1050:2,
destination = queue://TestQueue, transactionId = null, messageCount = 151, poisonCause = null};
Expected message count (151) differs from count in dispatched-list (152)
	at org.apache.activemq.command.MessageAck.visit(
	at org.apache.activemq.transport.MutexTransport.onCommand(
	at org.apache.activemq.transport.WireFormatNegotiator.onCommand(
	at org.apache.activemq.transport.AbstractInactivityMonitor.onCommand(
	at org.apache.activemq.transport.TransportSupport.doConsume(
	at org.apache.activemq.transport.tcp.TcpTransport.doRun(
{code}

The problem occurs when a duplicate dispatch is one of many in-flight messages to a destination.
Duplicate detection auto-acks the message with a standard ack in place of an individual ack.
The standard ack then produces the exception above because it does not match the broker's
dispatch list. Enabling optimizeAcknowledge on the connection factory appears to make the
problem more probable. The duplicate itself originates from a failover recovery/reconnect resend.
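
As a hedged illustration, a client configuration that combines the two contributing factors
(the failover transport, which is the source of the resend, and optimizeAcknowledge, which
batches acks and so widens the window for a count mismatch) might look like the following
broker URL; the host and port are placeholders:
{code}
failover:(tcp://localhost:61616)?jms.optimizeAcknowledge=true
{code}
Passing this URL to an ActiveMQConnectionFactory enables both behaviours, which is the
combination under which the Unmatched acknowledge exception was observed.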

The end result is pending messages on the queue and redelivery after a restart.

In some cases, the need for duplicate detection can be circumvented at source via the kahaDB
store producer audit; the default LRU cache size is 64, and increasing it can help:
{code}
<kahaDB ... maxFailoverProducersToTrack="2048" />
{code}
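
In context, the attribute sits on the kahaDB persistence adapter in the broker XML. A minimal
sketch follows; the directory value is a typical placeholder, not part of this issue:
{code}
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          maxFailoverProducersToTrack="2048"/>
</persistenceAdapter>
{code}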

This message is automatically generated by JIRA.

