From: Friso van Vollenhoven
To: user@hbase.apache.org
Subject: Re: Java Committed Virtual Memory significantly larger than Heap Memory
Date: Wed, 12 Jan 2011 21:20:56 +0000
Message-ID: <310F9349-EA38-405A-86B9-B4582939FD7F@xebia.com>

Hi,

My guess is indeed that it has to do with using the reinit() method on compressors and making them long-lived instead of throwaway, combined with the LZO implementation of reinit(), which magically causes the NIO buffer objects not to be finalized and, as a result, not to release their native allocations. It's just a theory and I haven't had the time to properly verify it (unfortunately, I spend most of my time writing application code), but Todd said he will be looking into it further. I browsed the LZO code to see what was going on there, but with my limited knowledge of the HBase code it would be bold to say that this is for sure the case. It would be my first direction of investigation.
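To make that suspicion a bit more concrete, here is a minimal, self-contained sketch in Java (the class names are made up; this is not the real hadoop-lzo code) of the pattern I have in mind: a long-lived compressor whose reinit() allocates fresh direct buffers every time, so native allocations pile up outside the heap until the old buffers happen to be finalized.

import java.nio.ByteBuffer;

// Hypothetical stand-in for a codec's compressor, NOT the actual hadoop-lzo
// LzoCompressor. It only illustrates the suspected pattern: reinit() on a
// long-lived instance allocates new direct buffers, and the old buffers
// only give back their native memory once the GC finalizes them.
public class ReinitLeakSketch {

    static class LongLivedCompressor {
        private ByteBuffer uncompressed;
        private ByteBuffer compressed;

        void reinit(int bufferSize) {
            // Each call drops the references to the previous buffers, but
            // their native allocations stay around until finalization runs.
            uncompressed = ByteBuffer.allocateDirect(bufferSize);
            compressed = ByteBuffer.allocateDirect(bufferSize);
        }
    }

    public static void main(String[] args) {
        LongLivedCompressor compressor = new LongLivedCompressor();
        // Simulate a region server reusing one compressor over and over:
        // the Java heap barely moves, but committed/native memory climbs
        // with every reinit() until the dead buffers get collected.
        for (int i = 0; i < 1000; i++) {
            compressor.reinit(64 * 1024);
        }
        Runtime rt = Runtime.getRuntime();
        System.out.println("heap used (MB): "
                + (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
    }
}

Watching a process like this with pmap or VisualVM next to plain heap monitoring should show the same kind of symptom as on the region server: a small, stable heap next to a much larger committed size.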
I would add some logging to the LZO code where new direct byte buffers are created, to log how often that happens and what size they are, and then redo the workload that shows the leak (see the sketch below the quoted message). Together with some profiling, you should be able to see how long it takes for these to get finalized.

Cheers,
Friso


On 12 Jan 2011, at 20:08, Stack wrote:

> 2011/1/12 Friso van Vollenhoven:
>> No, I haven't. But the Hadoop (mapreduce) LZO compression is not the problem. Compressing the map output using LZO works just fine. The problem is HBase LZO compression. The region server process is the one with the memory leak...
>>
>
> (Sorry for dumb question Friso) But HBase is leaking because we make
> use of the Compression API in a manner that produces leaks?
> Thanks,
> St.Ack
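Here is a rough sketch of the kind of logging helper I mean (the names are made up; this is not existing HBase or hadoop-lzo code). The idea is to route the codec's ByteBuffer.allocateDirect(...) calls through one place, re-run the workload, and read off how many buffers get created and how much native memory they add up to:

import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper, not part of hadoop-lzo: funnel every direct-buffer
// allocation in the codec through one method so the log shows how often
// buffers get created, how big they are, and how many native bytes have
// been handed out while the workload runs.
public final class DirectBufferTracker {

    private static final AtomicLong allocations = new AtomicLong();
    private static final AtomicLong totalBytes = new AtomicLong();

    private DirectBufferTracker() {
    }

    public static ByteBuffer allocateDirect(int capacity, String site) {
        long count = allocations.incrementAndGet();
        long bytes = totalBytes.addAndGet(capacity);
        // In the real codec this would go through its normal logger
        // instead of stdout.
        System.out.println("direct buffer #" + count + ", " + capacity
                + " bytes, allocated from " + site
                + ", ~" + bytes + " bytes handed out so far");
        return ByteBuffer.allocateDirect(capacity);
    }

    public static void main(String[] args) {
        // Quick demo of the output; in practice the calls would sit at the
        // allocation sites inside the compressor's init/reinit path.
        allocateDirect(64 * 1024, "compressor init");
        allocateDirect(64 * 1024, "compressor reinit");
    }
}

Combined with a profiler that shows the finalizer queue, that should tell you whether new buffers really get created on every reinit() and how long the old ones survive before their native memory is released.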