Date: Fri, 17 Jan 2014 00:47:20 +0000 (UTC)
From: "Timothy Bish (JIRA)"
To: dev@activemq.apache.org
Reply-To: dev@activemq.apache.org
Subject: [jira] [Updated] (AMQ-4977) Memory leak in ConnectionStateTracker when browsing non-empty queues

     [ https://issues.apache.org/jira/browse/AMQ-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Timothy Bish updated AMQ-4977:
------------------------------
    Priority: Major  (was: Critical)

> Memory leak in ConnectionStateTracker when browsing non-empty queues
> ---------------------------------------------------------------------
>
>                 Key: AMQ-4977
>                 URL: https://issues.apache.org/jira/browse/AMQ-4977
>             Project: ActiveMQ
>          Issue Type: Bug
>    Affects Versions: 5.6.0, 5.7.0, 5.8.0, 5.9.0
>            Reporter: Georgi Danov
>
> I think I found a case that is not handled by the fix for AMQ-3316. We see memory leaks connected to this bug, and after a good amount of head-banging I think the problem is in how two methods work together: processMessagePull and trackBack. The first one has this piece of code:
> {code}
> // leave a single instance in the cache
> final String id = pull.getDestination() + "::" + pull.getConsumerId();
> messageCache.put(id.intern(), pull);
> {code}
> while the second one unconditionally increases currentCacheSize, regardless of whether the previous method added a new entry to the cache or merely replaced an existing one.
> The situation where entries are replaced (not added) while currentCacheSize grows very fast, until it wraps around and becomes negative, is the following:
> * have some logic that frequently creates a queue browser and iterates through all the entries
> * have the queue contain at least one message most of the time; the more messages in the queue, the faster currentCacheSize grows
> The reason is that processMessagePull reuses the consumer and destination ID for each browsed message in the queue when org.apache.activemq.ActiveMQQueueBrowser#hasMoreElements is invoked. trackBack is ignorant of this and keeps adding to the size even though the number of entries in the cache stays the same.
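> A minimal, self-contained sketch of that interplay (simplified stand-ins for the two methods, not the actual ConnectionStateTracker code) shows the effect: the cache stays at one entry while the counter keeps climbing.
> {code}
> import java.util.HashMap;
> import java.util.Map;
>
> // Simplified model of the interaction described above -- illustrative only.
> public class CacheSizeDriftSketch {
>
>     private final Map<String, String> messageCache = new HashMap<String, String>();
>     private long currentCacheSize = 0;
>
>     // Stand-in for processMessagePull: one entry per destination::consumerId,
>     // so repeated pulls from the same browser REPLACE the cached entry.
>     void processMessagePull(String destination, String consumerId, String pull) {
>         String id = destination + "::" + consumerId;
>         messageCache.put(id, pull);
>     }
>
>     // Stand-in for trackBack: always adds to the counter, even when the put
>     // above only replaced an existing entry.
>     void trackBack(int commandSize) {
>         currentCacheSize += commandSize;
>     }
>
>     public static void main(String[] args) {
>         CacheSizeDriftSketch tracker = new CacheSizeDriftSketch();
>         // One browser repeatedly pulling from the same queue: same key every time.
>         for (int i = 0; i < 1000; i++) {
>             tracker.processMessagePull("queue://TEST", "consumer-1", "pull-" + i);
>             tracker.trackBack(200);
>         }
>         // Prints: cache entries = 1, currentCacheSize = 200000
>         System.out.println("cache entries = " + tracker.messageCache.size()
>                 + ", currentCacheSize = " + tracker.currentCacheSize);
>     }
> }
> {code}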
> Here is a log from reproducing the issue, as proof:
> {code}
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 10000, 5 elements, pending scans:10, memory: 8951KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 20000, 10 elements, pending scans:10, memory: 10845KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 30000, 15 elements, pending scans:10, memory: 12645KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 40000, 20 elements, pending scans:10, memory: 10363KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 50000, 25 elements, pending scans:10, memory: 12169KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 60000, 30 elements, pending scans:10, memory: 9852KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 70000, 35 elements, pending scans:10, memory: 11657KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 80000, 40 elements, pending scans:10, memory: 9401KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 90000, 45 elements, pending scans:10, memory: 11222KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 100000, 50 elements, pending scans:10, memory: 13047KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 110000, 55 elements, pending scans:10, memory: 10767KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 120000, 60 elements, pending scans:10, memory: 12567KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 130000, 65 elements, pending scans:10, memory: 10256KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 138800, 67 elements, pending scans:10, memory: 12085KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 146800, 67 elements, pending scans:10, memory: 9745KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 154800, 67 elements, pending scans:10, memory: 11566KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 162800, 67 elements, pending scans:10, memory: 9225KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 170800, 67 elements, pending scans:10, memory: 11013KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 178800, 67 elements, pending scans:10, memory: 12812KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 186800, 67 elements, pending scans:10, memory: 10522KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 194800, 67 elements, pending scans:10, memory: 12328KB
> 2014-01-16 23:05:14 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 202800, 67 elements, pending scans:10, memory: 9999KB
> 2014-01-16 23:05:15 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 210800, 67 elements, pending scans:10, memory: 11805KB
> 2014-01-16 23:05:15 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 218800, 67 elements, pending scans:10, memory: 9496KB
> 2014-01-16 23:05:15 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 226800, 67 elements, pending scans:10, memory: 11316KB
> 2014-01-16 23:05:15 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 234800, 67 elements, pending scans:10, memory: 13100KB
> 2014-01-16 23:05:15 WARN ActiveMqMemoryLeakTest - MaxCacheSize: 131072, currentCacheSize: 242800, 67 elements, pending scans:10, memory: 10754KB
> {code}
> One can see that garbage collection works well (the cache does not grow), yet currentCacheSize keeps increasing.
> Unfortunately I cannot easily extract the original code and provide it. I hope the explanation is clear enough.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
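Since the original test could not be shared, here is a rough sketch of the kind of browser loop described in the report. It is illustrative only: the broker URL, queue name, and iteration count are placeholders (not from the report), and it assumes a failover: transport URL so that client-side connection state tracking (ConnectionStateTracker) is in play and that the target queue already holds messages; exact transport options may differ from the reporter's setup.
{code}
import java.util.Enumeration;

import javax.jms.Connection;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class BrowserLeakRepro {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; failover: enables client-side connection state tracking.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.QUEUE"); // placeholder queue name
            // Each pass creates a new browser and walks every message; with the bug
            // present, currentCacheSize keeps climbing while the message cache itself
            // holds only one entry per destination::consumerId.
            for (int pass = 0; pass < 10000; pass++) {
                QueueBrowser browser = session.createBrowser(queue);
                Enumeration<?> messages = browser.getEnumeration();
                while (messages.hasMoreElements()) {
                    messages.nextElement();
                }
                browser.close();
            }
        } finally {
            connection.close();
        }
    }
}
{code}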