hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-11042) TestForceCacheImportantBlocks OOMs occasionally in 0.94
Date Sun, 20 Apr 2014 23:31:16 GMT

    [ https://issues.apache.org/jira/browse/HBASE-11042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13975320#comment-13975320 ]

stack commented on HBASE-11042:
-------------------------------

bq. stack it seems you know the code in HFileWriterV1. Do you want to have a look?

No.  v1 is dead.  Go ahead w/ your patch.

> TestForceCacheImportantBlocks OOMs occasionally in 0.94
> -------------------------------------------------------
>
>                 Key: HBASE-11042
>                 URL: https://issues.apache.org/jira/browse/HBASE-11042
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Lars Hofhansl
>             Fix For: 0.94.19
>
>         Attachments: 11042-0.94.txt
>
>
> This trace:
> {code}
> Caused by: java.lang.OutOfMemoryError
> 	at java.util.zip.Deflater.init(Native Method)
> 	at java.util.zip.Deflater.<init>(Deflater.java:169)
> 	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:91)
> 	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:110)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
> 	at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
> 	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:299)
> 	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:283)
> 	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:207)
> 	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:356)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1330)
> 	at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:913)
> {code}
> Note that this is caused specifically by HFileWriterV1 when using compression. It looks
> like the compression resources are not released (see the sketches after this quoted
> description).
> Not sure it's worth fixing this at this point. The test can be fixed either by not using
> compression (why are we using compression anyway?) or by not testing for HFileV1.
> [~stack] it seems you know the code in HFileWriterV1. Do you want to have a look?
> Maybe there is a quick fix in HFileWriterV1.
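
For context on the trace above, a minimal standalone sketch (illustrative only, not HBase code; the class name and loop bound are made up) of the failure mode: java.util.zip.Deflater allocates native zlib memory in its native init(), outside the Java heap, so the garbage collector sees little pressure and an OutOfMemoryError can surface in Deflater.init() long before the heap fills, unless end() is called explicitly.

{code}
import java.util.zip.Deflater;

// Illustrative only: each Deflater grabs native zlib buffers in its
// native init(). The Java heap stays small, so GC/finalization may
// never reclaim them in time, and allocation eventually fails with
// an OutOfMemoryError in Deflater.init(), as in the trace above.
public class DeflaterLeakSketch {
  public static void main(String[] args) {
    for (int i = 0; i < 1000000; i++) {
      Deflater deflater = new Deflater();
      // Without this call the native memory is released only when the
      // object is finalized; skipping it reproduces the leak pattern.
      deflater.end();
    }
  }
}
{code}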
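
One common remedy in Hadoop-based code (a sketch of the general pattern only; whether the attached 11042-0.94.txt patch takes this route is not shown here) is to borrow a Compressor from org.apache.hadoop.io.compress.CodecPool and return it in a finally block, so the underlying Deflater is reused rather than leaked once per stream:

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.io.compress.Compressor;
import org.apache.hadoop.io.compress.DefaultCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class PooledCompressorSketch {
  public static void main(String[] args) throws IOException {
    // DefaultCodec (zlib) stands in for the GZIP codec here so the
    // sketch runs without native libraries installed.
    DefaultCodec codec = ReflectionUtils.newInstance(DefaultCodec.class, new Configuration());
    Compressor compressor = CodecPool.getCompressor(codec);
    try {
      CompressionOutputStream out =
          codec.createOutputStream(new ByteArrayOutputStream(), compressor);
      out.write("block bytes".getBytes("UTF-8"));
      out.finish();
      out.close();
    } finally {
      // Returning the compressor pools its zlib resources for reuse
      // instead of allocating a fresh Deflater per stream.
      CodecPool.returnCompressor(compressor);
    }
  }
}
{code}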



--
This message was sent by Atlassian JIRA
(v6.2#6252)
