hbase-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-11042) TestForceCacheImportantBlocks OOMs occasionally in 0.94
Date Mon, 21 Apr 2014 02:22:15 GMT

    [ https://issues.apache.org/jira/browse/HBASE-11042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975352#comment-13975352 ]

Hudson commented on HBASE-11042:
--------------------------------

FAILURE: Integrated in HBase-0.94-on-Hadoop-2 #80 (See [https://builds.apache.org/job/HBase-0.94-on-Hadoop-2/80/])
HBASE-11042 TestForceCacheImportantBlocks OOMs occasionally in 0.94. (larsh: rev 1588841)
* /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/io/hfile/TestForceCacheImportantBlocks.java


> TestForceCacheImportantBlocks OOMs occasionally in 0.94
> -------------------------------------------------------
>
>                 Key: HBASE-11042
>                 URL: https://issues.apache.org/jira/browse/HBASE-11042
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>            Assignee: Lars Hofhansl
>             Fix For: 0.94.19
>
>         Attachments: 11042-0.94.txt
>
>
> This trace:
> {code}
> Caused by: java.lang.OutOfMemoryError
> 	at java.util.zip.Deflater.init(Native Method)
> 	at java.util.zip.Deflater.<init>(Deflater.java:169)
> 	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:91)
> 	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:110)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
> 	at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
> 	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:299)
> 	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:283)
> 	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:207)
> 	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:356)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1330)
> 	at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:913)
> {code}
> Note that this is caused specifically by HFileWriterV1 when using compression. It looks like the compression resources are not released (a minimal sketch of this leak pattern follows below).
> Not sure it's worth fixing this at this point. The test can be fixed either by not using compression (why are we using compression here anyway?) or by not testing for HFileV1.
> [~stack] it seems you know the code in HFileWriterV1. Do you want to have a look? Maybe there is a quick fix in HFileWriterV1.
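
For context, a minimal JDK-only sketch of the leak pattern the trace points at (the class name and loop count are illustrative, not taken from HBase): java.util.zip.Deflater allocates native zlib memory that is only freed by end(), so a compressor created per flush but never released can exhaust native memory long before heap pressure triggers a GC.

{code}
import java.util.zip.Deflater;

// Illustrative only: each Deflater grabs native zlib buffers outside the
// Java heap. Those buffers are freed by end(); the finalizer calls end()
// eventually, but GC is driven by heap pressure, not native pressure, so
// leaked Deflaters can OOM in Deflater.init() exactly as in the trace.
public class DeflaterLeakSketch {
  public static void main(String[] args) {
    for (int i = 0; i < 1000000; i++) {
      Deflater leaked = new Deflater(); // native allocation happens here
      // leaked.end() is never called -> native memory accumulates
    }
  }

  // The non-leaking pattern: release the native state deterministically.
  static void compressOnce() {
    Deflater d = new Deflater();
    try {
      d.setInput(new byte[] { 1, 2, 3 });
      d.finish();
      byte[] out = new byte[64];
      while (!d.finished()) {
        d.deflate(out);
      }
    } finally {
      d.end(); // frees the native zlib state immediately
    }
  }
}
{code}

Calling end() in a finally block (or returning a pooled compressor instead of creating a fresh one per flush) keeps the native footprint bounded regardless of GC timing.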



--
This message was sent by Atlassian JIRA
(v6.2#6252)
