hbase-issues mailing list archives

From "Lars Hofhansl (Updated) (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-5435) TestForceCacheImportantBlocks fails with OutOfMemoryError
Date Wed, 21 Mar 2012 19:17:40 GMT

     [ https://issues.apache.org/jira/browse/HBASE-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl updated HBASE-5435:
---------------------------------

    Fix Version/s:     (was: 0.94.0)
                   0.96.0
    
> TestForceCacheImportantBlocks fails with OutOfMemoryError
> ---------------------------------------------------------
>
>                 Key: HBASE-5435
>                 URL: https://issues.apache.org/jira/browse/HBASE-5435
>             Project: HBase
>          Issue Type: Test
>            Reporter: Zhihong Yu
>             Fix For: 0.96.0
>
>
> Here is the related stack trace (see https://builds.apache.org/job/HBase-TRUNK/2665/testReport/org.apache.hadoop.hbase.io.hfile/TestForceCacheImportantBlocks/testCacheBlocks_1_/):
> {code}
> Caused by: java.lang.OutOfMemoryError
> 	at java.util.zip.Deflater.init(Native Method)
> 	at java.util.zip.Deflater.<init>(Deflater.java:124)
> 	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:46)
> 	at java.util.zip.GZIPOutputStream.<init>(GZIPOutputStream.java:58)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream$ResetableGZIPOutputStream.<init>(ReusableStreamGzipCodec.java:79)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec$ReusableGzipOutputStream.<init>(ReusableStreamGzipCodec.java:90)
> 	at org.apache.hadoop.hbase.io.hfile.ReusableStreamGzipCodec.createOutputStream(ReusableStreamGzipCodec.java:130)
> 	at org.apache.hadoop.io.compress.GzipCodec.createOutputStream(GzipCodec.java:101)
> 	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createPlainCompressionStream(Compression.java:239)
> 	at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.createCompressionStream(Compression.java:223)
> 	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.getCompressingStream(HFileWriterV1.java:270)
> 	at org.apache.hadoop.hbase.io.hfile.HFileWriterV1.close(HFileWriterV1.java:416)
> 	at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:1115)
> 	at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:706)
> 	at org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:633)
> 	at org.apache.hadoop.hbase.regionserver.Store.access$400(Store.java:106)
> {code}
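> For context: the OOM is thrown from {{Deflater.init}}, a native method. Each {{GZIPOutputStream}} constructs its own {{Deflater}}, which allocates a zlib buffer in native (off-heap) memory, so this failure reflects native-memory exhaustion rather than Java heap pressure. A minimal sketch (illustrative only, not HBase code) of the allocation and release path:
> {code}
> import java.io.ByteArrayOutputStream;
> import java.util.zip.GZIPOutputStream;
>
> // Sketch: GZIPOutputStream's constructor creates a Deflater, whose
> // native init() allocates the zlib state seen in the stack trace.
> // close() calls Deflater.end(), releasing that native memory; streams
> // that are never closed leak it until the allocator fails with OOM.
> public class GzipNativeMemory {
>     static byte[] gzip(byte[] data) throws Exception {
>         ByteArrayOutputStream bos = new ByteArrayOutputStream();
>         try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
>             gz.write(data);
>         } // try-with-resources closes the stream and frees the Deflater
>         return bos.toByteArray();
>     }
>
>     public static void main(String[] args) throws Exception {
>         byte[] out = gzip("hello".getBytes("UTF-8"));
>         // gzip output begins with the magic bytes 0x1f 0x8b
>         if ((out[0] & 0xff) != 0x1f || (out[1] & 0xff) != 0x8b)
>             throw new AssertionError("not gzip output");
>         System.out.println("ok");
>     }
> }
> {code}
> The test's repeated flushes open many compression streams in quick succession, which is consistent with native allocation failing under constrained build-machine memory.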

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
