Date: Mon, 21 Apr 2014 22:55:18 +0000 (UTC)
From: "Lars Hofhansl (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Updated] (HBASE-11042) TestForceCacheImportantBlocks OOMs occasionally in 0.94

[ https://issues.apache.org/jira/browse/HBASE-11042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl updated HBASE-11042:
----------------------------------
    Issue Type: Test  (was: Bug)

> TestForceCacheImportantBlocks OOMs occasionally in 0.94
> -------------------------------------------------------
>
>                 Key: HBASE-11042
>                 URL: https://issues.apache.org/jira/browse/HBASE-11042
>             Project: HBase
>          Issue Type: Test
>            Reporter: Lars Hofhansl
>            Assignee: Lars Hofhansl
>             Fix For: 0.94.19
>
>         Attachments: 11042-0.94.txt
>
>
> This trace:
> {code}
> Caused by: java.lang.OutOfMemoryError
>         at java.util.zip.Deflater.init(Native Method)
>         at java.util.zip.Deflater.<init>(Deflater.java:169)
>         at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:91)
>         at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:110)
>         at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
>         at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
>         at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
>         at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
>         at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:299)
>         at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:283)
>         at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:207)
>         at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:356)
>         at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1330)
>         at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:913)
> {code}
> Note that this is caused specifically by HFileWriterV1 when using compression. It looks like the compression resources are not released.
> Not sure it's worth fixing this at this point. The test can be fixed either by not using compression (why are we using compression anyway?), or by not testing HFileV1.
> [~stack], it seems you know the code in HFileWriterV1. Do you want to have a look? Maybe there is a quick fix in HFileWriterV1.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
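For reference, the failure mode the trace points at can be sketched in a minimal standalone Java example (not HBase code): every `GZIPOutputStream` allocates a native `Deflater`, and that native memory is only released when the stream is closed (which calls the deflater's `end()`). If streams are created repeatedly without being closed, native allocation eventually fails inside `Deflater.init`, which is exactly the top frame of the OOM above. The class and method names below are hypothetical illustration, not the HFileWriterV1 code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipResourceDemo {

    // Correct pattern: try-with-resources guarantees close(), which in turn
    // calls Deflater.end() and frees the native zlib memory. Omitting the
    // close (as the leak hypothesized in this issue would) leaves the native
    // Deflater alive until GC finalization, which can lag far behind
    // allocation and produce the OutOfMemoryError seen in the trace.
    static byte[] compress(byte[] data) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        } // close() here releases the native Deflater
        return bos.toByteArray();
    }

    // Decompress for a round-trip check; GZIPInputStream likewise holds a
    // native Inflater that close() releases.
    static byte[] decompress(byte[] gzipped) throws Exception {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(gzipped))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "hbase".getBytes(StandardCharsets.UTF_8);
        byte[] roundTrip = decompress(compress(original));
        if (!new String(roundTrip, StandardCharsets.UTF_8).equals("hbase")) {
            throw new AssertionError("gzip round-trip failed");
        }
        System.out.println("round-trip ok");
    }
}
```

Hadoop's own codecs avoid per-stream allocation by recycling compressors through `org.apache.hadoop.io.compress.CodecPool` (`getCompressor`/`returnCompressor`); a missed `returnCompressor` on the HFileWriterV1 close path would match the symptoms described here.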