Message-ID: <961281802.1237399963076.JavaMail.jira@brutus>
Date: Wed, 18 Mar 2009 11:12:43 -0700 (PDT)
From: "Torsten Mielke (JIRA)"
Reply-To: dev@activemq.apache.org
To: dev@activemq.apache.org
Subject: [jira] Issue Comment Edited: (AMQ-2009) Problem with message dispatch after a while
In-Reply-To: <1503857571.1227225545425.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

    [ https://issues.apache.org/activemq/browse/AMQ-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=50652#action_50652 ]

Torsten Mielke edited comment on AMQ-2009 at 3/18/09 11:11 AM:
---------------------------------------------------------------

I don't want to say anything about the bug itself, but I took consumertest.zip and turned it into a Maven JUnit test (see the attached testcase.zip) in order to investigate this problem. This new testcase reproduces the same behavior; however, I do not believe it shows a bug. Let me explain why.

Both consumers listen on the same queue. The first consumer only closes its session *after* the second consumer has tried to receive the message that the second producer sent to the queue. So the first consumer is still active when the second consumer calls f_consumer.receive(5000).

What happens at runtime is that even though both consumers use pull mode (by calling consumer.receive(5000)), each consumer is given the default prefetch size, since no prefetch limit is set explicitly. The first consumer acked the first message, so it is available to receive more messages (even though it does not actively call f_consumer.receive()). When the second message appears on the queue, the broker therefore dispatches it straight to the *first consumer*, where it stays in the prefetch buffer until it is either received by that consumer calling f_consumer.receive() or the session is closed.

If in the testcase you call

{code:java}
consumerTest.receiveMessage(dinges);
{code}

rather than

{code:java}
consumerTestNew.receiveMessage(dinges);
{code}

the message is received fine.
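To make the dispatch behavior easier to see outside of the attached testcase, here is a minimal standalone sketch of the situation described above. The broker URL, queue name, and class name are illustrative placeholders (nothing here is taken from consumertest.zip or testcase.zip); the comments describe the behavior expected with the default prefetch size.

{code:java}
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchDispatchSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL; no prefetch limit is set, so the default applies.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = producerSession.createQueue("TEST.QUEUE");
        MessageProducer producer = producerSession.createProducer(queue);

        // First pull-mode consumer, created with the default queue prefetch (1000).
        Session firstSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer firstConsumer = firstSession.createConsumer(queue);

        producer.send(producerSession.createTextMessage("msg-1"));
        TextMessage m1 = (TextMessage) firstConsumer.receive(5000);
        System.out.println("first consumer received: " + (m1 == null ? null : m1.getText()));

        // msg-2 is sent while the first consumer is still the only consumer on the
        // queue, so the broker dispatches it into that consumer's prefetch buffer.
        producer.send(producerSession.createTextMessage("msg-2"));

        // A second consumer created afterwards does not see msg-2, so this receive()
        // is expected to time out and return null until firstSession is closed.
        Session secondSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer secondConsumer = secondSession.createConsumer(queue);
        TextMessage m2 = (TextMessage) secondConsumer.receive(5000);
        System.out.println("second consumer received: " + (m2 == null ? null : m2.getText()));

        connection.close();
    }
}
{code}

Closing firstSession before the second receive() (or running with a queue prefetch of 0) should make the second receive() return msg-2, which mirrors the two workarounds below.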
So there are two ways to work around this:

1. Have the first consumer close its session before the second consumer receives the message:

{code:title=ConsumerProblemTest.java}
...
consumerTest.close();

// create second consumer and read msg
DurableConsumer consumerTestNew = new DurableConsumer();
consumerTestNew.init();
consumerTestNew.receiveMessage(dinges);
consumerTestNew.close();
{code}

2. Use a prefetch limit of 0 so messages do not get prefetched to consumers:

{code:title=DurableConsumer.java}
env.put(InitialContext.PROVIDER_URL, "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
{code}

With either of these two changes the testcase succeeds. Give it a go: simply run mvn test and it should fail out of the box; then make one of these changes and the testcase will succeed.
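As a side note on workaround 2, the same prefetch limit can also be configured programmatically on the ActiveMQ connection factory instead of through the broker URL. This is only a sketch with placeholder names, not code from the attached testcase:

{code:java}
import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchZeroFactory {
    public static ActiveMQConnectionFactory create() {
        // Placeholder broker URL; same effect as appending
        // jms.prefetchPolicy.queuePrefetch=0 to the URL, but applied on the
        // factory before any connection is created.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.getPrefetchPolicy().setQueuePrefetch(0);
        return factory;
    }
}
{code}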
> Problem with message dispatch after a while
> -------------------------------------------
>
>                 Key: AMQ-2009
>                 URL: https://issues.apache.org/activemq/browse/AMQ-2009
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 5.1.0, 5.2.0
>            Reporter: Rajani Chennamaneni
>            Assignee: Rob Davies
>            Priority: Blocker
>         Attachments: consumertest.zip, DispatchMultipleConsumersTest.java, JConsole-screenshot.jpg, testcase.zip
>
>
> Messages are not getting dispatched after a while (although the broker still accepts new incoming messages) until the broker is restarted. This problem is described in several posts:
> http://www.nabble.com/Pending-Messages-are-shown-in-ActiveMQ-td20241332.html
> http://www.nabble.com/Consumer-Listener-stop-receving-message-until-ActiveMQ-restart-td20355247.html
> http://www.nabble.com/Stuck-messages---Dispatch-issues-td20467949.html
> An issue was also opened in the Spring project for this, on the assumption that it was a Spring problem:
> http://jira.springframework.org/browse/SPR-5110
> I am not able to reproduce it with a JUnit test case that starts the BrokerService inside the test case; I guess I am not hitting the right stress conditions that way. But when I run the test case against an externally running ActiveMQ instance backed by Oracle database persistence, it is reproducible most of the time. It does not fail every time; sometimes it takes longer to appear than others.
> I was able to hit this situation of stuck messages on the queue most of the time using the following scenario:
> 1) Start 2 concurrent consumers for the queue using Spring's DefaultMessageListenerContainer with cacheLevelName set to CACHE_CONSUMER (a sketch of this setup follows at the end of this message).
> 2) Send messages to the queue on a stand-alone ActiveMQ broker instance using JMeter 2.3.2, with 50 threads looping 20 times.
> 3) After a while, Spring logs that no messages are being received, yet the messages are still shown in JConsole for ActiveMQ and in the database backing it for persistence.
> In 5.2 RC3 the problem is instead that the broker dispatches duplicate messages and does not properly remove them from its database after they are acknowledged.
> The attached test case might help to reproduce the problem when run against an externally running stand-alone ActiveMQ broker. Another way to see the problem is to load test with JMeter, sending messages to a queue with a Camel route that moves messages from this queue to another; you will notice that it stops moving messages after a while, or copies duplicates in the case of 5.2 RC3.
> Sorry about such a huge description, but it is a real problem! A different team at our company is having this issue in production with 5.1. They are using ActiveMQ as an embedded broker with Derby for persistence.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
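As an illustration of step 1 in the reproduction steps above, a programmatic setup of Spring's DefaultMessageListenerContainer with two concurrent consumers and consumer caching might look like the following sketch. The broker URL, queue name, and listener body are placeholders, not the reporter's actual configuration:

{code:java}
import javax.jms.Message;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerContainerSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL.
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("TEST.QUEUE");
        // Two concurrent consumers with cached consumers, as in step 1 above.
        container.setConcurrentConsumers(2);
        container.setCacheLevelName("CACHE_CONSUMER");
        container.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                System.out.println("received: " + message);
            }
        });
        container.afterPropertiesSet();
        container.start();
        // The container now runs its consumers on background threads.
    }
}
{code}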