Date: Tue, 30 Apr 2013 05:50:15 +0000 (UTC)
From: "Daniel Marbach (JIRA)"
To: dev@activemq.apache.org
Subject: [jira] [Commented] (AMQNET-434) FailoverTransport Memory Leak with TransactionState

    [ https://issues.apache.org/jira/browse/AMQNET-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645293#comment-13645293 ]

Daniel Marbach commented on AMQNET-434:
---------------------------------------

We found the root cause of the problem. Will add a patch.

> FailoverTransport Memory Leak with TransactionState
> ---------------------------------------------------
>
>                 Key: AMQNET-434
>                 URL: https://issues.apache.org/jira/browse/AMQNET-434
>             Project: ActiveMQ .Net
>          Issue Type: Bug
>    Affects Versions: 1.5.6
>            Reporter: Daniel Marbach
>            Assignee: Timothy Bish
>             Fix For: 1.6.0
>
>         Attachments: ConnectionStateTrackerMemoryLeak.cs
>
>
> I'm hunting down a possible memory leak. We have the following problem in production:
> when the consumer/subscriber endpoint runs for a long time with failover transport enabled, the memory grows indefinitely.
> I used YouTrack and AntsProfiler to hunt down the issue. The retention path I see in production is the following:
> The FailoverTransport's nested class FailoverTask has two ConnectionStateTrackers; each keeps a dictionary which links the ConnectionId to the ConnectionState. The ConnectionState itself has a dictionary which links the TransactionId to the TransactionState. The TransactionState tracks commands, BUT these commands are never freed from the transaction state and stay there forever, which will eventually blow up the memory.
> I'm currently investigating how to fix this but must first properly understand the code. I opened this issue in the hope that it will ring a bell for you guys.
> Daniel
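
A minimal C# sketch of the retention path described above, assuming simplified stand-ins for the ConnectionStateTracker, ConnectionState, and TransactionState classes; the member names and shapes here are illustrative assumptions, not the actual Apache.NMS.ActiveMQ definitions:

    // Sketch of the suspected leak: nothing ever removes a TransactionState
    // (or the commands it tracks) from the per-connection dictionary.
    using System;
    using System.Collections.Generic;

    class Command { public byte[] Payload = new byte[1024]; }

    class TransactionState
    {
        // Commands sent within the transaction are recorded so they could be
        // replayed after a failover reconnect; in the leak they are never cleared.
        private readonly List<Command> commands = new List<Command>();
        public void AddCommand(Command c) { commands.Add(c); }
        public int CommandCount { get { return commands.Count; } }
    }

    class ConnectionState
    {
        // TransactionId -> TransactionState
        public readonly Dictionary<string, TransactionState> Transactions =
            new Dictionary<string, TransactionState>();
    }

    class ConnectionStateTracker
    {
        // ConnectionId -> ConnectionState, as kept by the failover transport.
        public readonly Dictionary<string, ConnectionState> Connections =
            new Dictionary<string, ConnectionState>();
    }

    static class ConnectionStateTrackerMemoryLeakDemo
    {
        static void Main()
        {
            var tracker = new ConnectionStateTracker();
            var connection = new ConnectionState();
            tracker.Connections["connection-1"] = connection;

            // Simulate many transactions on one long-lived connection.
            for (int i = 0; i < 100000; i++)
            {
                var tx = new TransactionState();
                tx.AddCommand(new Command());
                connection.Transactions["tx-" + i] = tx;
                // Commit/rollback happens here in the real client, but nothing
                // removes the entry or its tracked commands, so memory only grows.
            }

            Console.WriteLine("Tracked transactions: " + connection.Transactions.Count);
        }
    }

Under this assumption, the fix would be to remove the TransactionState (or at least clear its tracked commands) once the transaction is committed or rolled back; that presumably is what the attached ConnectionStateTrackerMemoryLeak.cs reproduces and the promised patch addresses.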