Date: Fri, 18 Jan 2013 19:00:13 +0000 (UTC)
From: "Elliott Clark (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-5458) Thread safety issues with Compression.Algorithm.GZ and CompressionTest

    [ https://issues.apache.org/jira/browse/HBASE-5458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557478#comment-13557478 ]

Elliott Clark commented on HBASE-5458:
--------------------------------------

Yeah, you're correct. Every block, not every file.

> Thread safety issues with Compression.Algorithm.GZ and CompressionTest
> ----------------------------------------------------------------------
>
>                  Key: HBASE-5458
>                  URL: https://issues.apache.org/jira/browse/HBASE-5458
>              Project: HBase
>           Issue Type: Bug
>           Components: io
>     Affects Versions: 0.90.5, 0.92.2, 0.96.0, 0.94.4
>             Reporter: David McIntosh
>             Assignee: Elliott Clark
>             Priority: Minor
>          Attachments: HBASE-5458-090-0.patch, HBASE-5458-090-1.patch, HBASE-5458-090-2.patch, HBASE-5458-092-2.patch, HBASE-5458-094-2.patch, HBASE-5458-trunk-2.patch
>
>
> I've seen some occasional NullPointerExceptions in ZlibFactory.isNativeZlibLoaded(conf) during region server startups and the completebulkload process. This is being caused by a null configuration getting passed to the isNativeZlibLoaded method. I think this happens when 2 or more threads call the CompressionTest.testCompression method at once. If the GZ algorithm has not been tested yet, both threads could continue on and attempt to load the compressor. For GZ, the getCodec method is not thread-safe, which could lead to one thread getting a reference to a GzipCodec that has a null configuration.
> {code}
> current:
> DefaultCodec getCodec(Configuration conf) {
>   if (codec == null) {
>     codec = new GzipCodec();
>     codec.setConf(new Configuration(conf));
>   }
>   return codec;
> }
> {code}
> One possible fix would be something like this:
> {code}
> DefaultCodec getCodec(Configuration conf) {
>   if (codec == null) {
>     GzipCodec gzip = new GzipCodec();
>     gzip.setConf(new Configuration(conf));
>     codec = gzip;
>   }
>   return codec;
> }
> {code}
> But that may not be totally safe without some synchronization. An upstream fix in CompressionTest could also prevent multi-threaded access to GZ.getCodec(conf).
> Exceptions:
> 12/02/21 16:11:56 ERROR handler.OpenRegionHandler: Failed open of region=all-monthly,,1326263896983.bf574519a95263ec23a2bad9f5b8cbf4.
> java.io.IOException: java.lang.NullPointerException
>         at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:89)
>         at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:2670)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2659)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2647)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:312)
>         at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:99)
>         at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:158)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
>         at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63)
>         at org.apache.hadoop.io.compress.GzipCodec.getCompressorType(GzipCodec.java:166)
>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
>         at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:236)
>         at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:84)
>         ... 9 more
> Caused by: java.io.IOException: java.lang.NullPointerException
>         at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:89)
>         at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:890)
>         at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:819)
>         at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.groupOrSplit(LoadIncrementalHFiles.java:405)
>         at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:323)
>         at org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$2.call(LoadIncrementalHFiles.java:321)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.NullPointerException
>         at org.apache.hadoop.io.compress.zlib.ZlibFactory.isNativeZlibLoaded(ZlibFactory.java:63)
>         at org.apache.hadoop.io.compress.GzipCodec.getCompressorType(GzipCodec.java:166)
>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:112)
>         at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:236)
>         at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:84)
>         ... 10 more

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
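
As a footnote to the synchronization concern raised in the quoted description: below is a minimal sketch of one well-known way to make a lazily initialized codec safe to hand out across threads, using a volatile field plus double-checked locking so the GzipCodec is fully configured before any other thread can see it. The holder class and field names are illustrative only; they are not taken from the HBase source or from the attached patches, which may resolve the race differently.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.DefaultCodec;
import org.apache.hadoop.io.compress.GzipCodec;

// Illustrative holder; in HBase the lazily created codec belongs to the
// Compression.Algorithm.GZ enum constant rather than a class like this.
final class GzCodecHolder {

  // volatile, so callers only ever observe a fully published codec
  private volatile DefaultCodec codec;

  DefaultCodec getCodec(Configuration conf) {
    DefaultCodec result = codec;
    if (result == null) {
      synchronized (this) {
        result = codec;
        if (result == null) {
          // Configure the codec completely before assigning it to the
          // shared field, so no caller can see a GzipCodec whose
          // Configuration is still null.
          GzipCodec gzip = new GzipCodec();
          gzip.setConf(new Configuration(conf));
          codec = gzip;
          result = gzip;
        }
      }
    }
    return result;
  }
}
{code}

An alternative, also mentioned in the description, is to synchronize one level up in CompressionTest.testCompression so that only one thread at a time can reach GZ.getCodec(conf) for an algorithm that has not yet been tested.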