Date: Fri, 5 Oct 2012 07:53:48 +1100 (NCT)
From: "Raul Kripalani (JIRA)"
To: issues@camel.apache.org
Reply-To: dev@camel.apache.org
Message-ID: <181885949.1804.1349384028791.JavaMail.jiratomcat@arcas>
In-Reply-To: <1207918980.159739.1349285888093.JavaMail.jiratomcat@arcas>
Subject: [jira] [Commented] (CAMEL-5683) JMS connection leak with request/reply producer on temporary queues
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

    [ https://issues.apache.org/jira/browse/CAMEL-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13469681#comment-13469681 ]

Raul Kripalani commented on CAMEL-5683:
---------------------------------------

Michael,

Many thanks for such a detailed description, test case, and bug report!

Have you tried setting the size of the ProducerCache to zero? Check [1] for instructions on how to do this. Beware: I haven't tested it; it's just a suggestion for a workaround.
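For reference, the FAQ in [1] describes doing this by setting a property on the CamelContext. A sketch of what that could look like in Spring XML, matching the Spring setup in this environment (untested; the property key is the one named on that page, and a value of 0 would mean no producers are cached):

```xml
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
  <!-- Property key taken from the FAQ linked as [1];
       0 disables the ProducerCache entirely -->
  <properties>
    <property key="CamelMaximumCachePoolSize" value="0"/>
  </properties>
  <!-- routes go here -->
</camelContext>
```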
If you have static endpoint URIs, then I don't think you should experience any churn or performance hit by having a non-existent ProducerCache.

Regards,
Raúl.

[1] http://camel.apache.org/how-do-i-configure-the-default-maximum-cache-size-for-producercache-or-producertemplate.html

> JMS connection leak with request/reply producer on temporary queues
> -------------------------------------------------------------------
>
>                 Key: CAMEL-5683
>                 URL: https://issues.apache.org/jira/browse/CAMEL-5683
>             Project: Camel
>          Issue Type: Bug
>          Components: camel-jms
>    Affects Versions: 2.10.0
>        Environment: Apache Camel 2.10.0
>                     ActiveMQ 5.6.0
>                     Spring 3.2.1.RELEASE
>                     Java 1.6.0_27
>                     SunOS HOST 5.10 Generic_144488-09 sun4v sparc SUNW,SPARC-Enterprise-T5220
>            Reporter: Michael Pilone
>        Attachments: CamelConnectionLeak.zip, Consumer List.txt, MAT Snapshot.png, Route Configuration.txt
>
>
> Over time I see the number of temporary queues in ActiveMQ slowly climb. Using JMX information and memory dumps in MAT, I believe the cause is a connection leak in Apache Camel.
> My environment contains 2 ActiveMQ brokers in a network-of-brokers configuration. There are about 15 separate applications which use Apache Camel to connect to the broker using the ActiveMQ/JMS component. The various applications have different load profiles and route configurations.
> In the more active client applications, I found that ActiveMQ was listing 300+ consumers when, based on my configuration, I would expect no more than 75. The vast majority of the consumers are sitting on a temporary queue. Over time, the 300 number increments by one or two over about a 4-hour period.
> I did a memory dump on one of the more active client applications and found about 275 DefaultMessageListenerContainers.
> Using MAT, I can see that some of the containers are referenced by JmsProducers in the ProducerCache; however, I can also see a large number of listener containers that are no longer referenced at all. I was also able to match up a soft-referenced producer/listener endpoint with an unreferenced listener, which means a second producer was created at some point.
> Looking through the ProducerCache code, it looks like the LRU cache uses soft references to producers, in my case a JmsProducer. This seems problematic for two reasons:
> - If memory gets constrained and the GC cleans up a producer, it is never properly stopped.
> - If the cache gets full and the map removes the LRU producer, it is never properly stopped.
> What I believe is happening is that my application is sending a few request/reply messages to a JmsProducer. The producer creates a TemporaryReplyManager, which creates a DefaultMessageListenerContainer. At some point, the JmsProducer is claimed by the GC (either via the soft reference or because the cache is full) and the reply manager is never stopped. This causes the listener container to continue listening on the temporary queue, consuming local resources and, more importantly, consuming resources on the JMS broker.
> I haven't had a chance to write an application to reproduce this behavior, but I will attach one of my route configurations and a screenshot of the MAT analysis looking at DefaultMessageListenerContainers. If needed, I could provide the entire memory dump for analysis (although I'd rather not post it publicly). The leak depends on memory usage or producer count in the client application, because the ProducerCache must have some churn. Like I said, in our production system we see about 12 temporary queues abandoned per client per day.
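Both failure modes described above come down to eviction happening without a lifecycle callback. As a rough illustration of what a cache that stops producers on eviction could look like, here is a plain-Java sketch using LinkedHashMap's eviction hook; the Stoppable interface and FakeProducer class are hypothetical stand-ins for Camel's Service lifecycle and JmsProducer, not Camel's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for Camel's Service/Producer lifecycle.
interface Stoppable {
    void stop();
}

// An LRU cache that stops an entry when it is evicted, instead of letting
// it be garbage-collected while it still holds live resources (such as a
// reply-queue listener container).
class StoppingLruCache<K, V extends Stoppable> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    StoppingLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder=true gives LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() > maxEntries) {
            eldest.getValue().stop(); // release resources before eviction
            return true;              // evict the least-recently-used entry
        }
        return false;
    }
}

class StoppingLruDemo {
    // Hypothetical producer that records whether it was stopped.
    static class FakeProducer implements Stoppable {
        boolean stopped;
        public void stop() { stopped = true; }
    }

    public static void main(String[] args) {
        StoppingLruCache<String, FakeProducer> cache = new StoppingLruCache<>(2);
        FakeProducer a = new FakeProducer();
        cache.put("a", a);
        cache.put("b", new FakeProducer());
        cache.put("c", new FakeProducer()); // capacity exceeded: "a" is evicted
        System.out.println("a stopped after eviction: " + a.stopped); // prints true
    }
}
```

This only addresses eviction from a full cache; handling the soft-reference case would additionally need something like a ReferenceQueue that stops producers as the GC reclaims them.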
> Unless I'm missing something, it looks like the producer cache would need to be much smarter to support stopping a producer when the soft reference is reclaimed or a member of the cache is ejected from the LRU list.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira