Date: Wed, 29 Jan 2014 12:52:10 +0000 (UTC)
From: "ramkrishna.s.vasudevan (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-10438) NPE from LRUDictionary when size reaches the max init value

    [ https://issues.apache.org/jira/browse/HBASE-10438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13885305#comment-13885305 ]

ramkrishna.s.vasudevan commented on HBASE-10438:
------------------------------------------------

Enabling COMPRESS_TAGS and retrying reproduces this issue.
> NPE from LRUDictionary when size reaches the max init value
> -----------------------------------------------------------
>
>                 Key: HBASE-10438
>                 URL: https://issues.apache.org/jira/browse/HBASE-10438
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.0
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Blocker
>             Fix For: 0.98.0
>
>
> This happened while testing tags with COMPRESS_TAGS=true/false. I was trying to change this attribute of compressing tags by altering the HCD. The DBE used is FAST_DIFF.
> In one particular case I got this:
> {code}
> 2014-01-29 16:20:03,023 ERROR [regionserver60020-smallCompactions-1390983591688] regionserver.CompactSplitThread: Compaction failed Request = regionName=usertable,user5146961419203824653,1390979618897.2dd477d0aed888c615a29356c0bbb19d., storeName=f1, fileCount=4, fileSize=498.6 M (226.0 M, 163.7 M, 67.0 M, 41.8 M), priority=6, time=1994941280334574
> java.lang.NullPointerException
>         at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.put(LRUDictionary.java:109)
>         at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.access$200(LRUDictionary.java:76)
>         at org.apache.hadoop.hbase.io.util.LRUDictionary.addEntry(LRUDictionary.java:62)
>         at org.apache.hadoop.hbase.io.TagCompressionContext.uncompressTags(TagCompressionContext.java:147)
>         at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.decodeTags(BufferedDataBlockEncoder.java:270)
>         at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:522)
>         at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeFirst(FastDiffDeltaEncoder.java:535)
>         at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.setCurrentBuffer(BufferedDataBlockEncoder.java:188)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.updateCurrentBlock(HFileReaderV2.java:1017)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1068)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:137)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:509)
>         at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:217)
>         at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:76)
>         at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
>         at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1074)
>         at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1382)
>         at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:475)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> {code}
> I am not able to reproduce this repeatedly. One thing to note is that I altered the table to enable COMPRESS_TAGS; before that it was false.
> My feeling is that this is not due to COMPRESS_TAGS, because we handle that per file by adding it to FILE_INFO.
> In the above stack trace the problem occurred during compaction, so the flushed file should have this property set. I think the problem could be with LRUDictionary.
> The reason for the NPE is:
> {code}
> if (currSize < initSize) {
>   // There is space to add without evicting.
>   indexToNode[currSize].setContents(stored, 0, stored.length);
>   setHead(indexToNode[currSize]);
>   short ret = (short) currSize++;
>   nodeToIndex.put(indexToNode[ret], ret);
>   System.out.println(currSize);
>   return ret;
> } else {
>   short s = nodeToIndex.remove(tail);
>   tail.setContents(stored, 0, stored.length);
>   // we need to rehash this.
>   nodeToIndex.put(tail, s);
>   moveToHead(tail);
>   return s;
> }
> {code}
> Here
> {code}
> short s = nodeToIndex.remove(tail);
> {code}
> returns a null value, and the auto-unboxing to the short primitive throws the NPE. Am digging into this further to see if I am able to reproduce it.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
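The failure mode quoted above can be demonstrated in isolation. The following is a minimal sketch with hypothetical names, not the actual LRUDictionary code: `HashMap.remove` returns null when the key is absent, and the implicit `Short` to `short` conversion at the assignment then auto-unboxes null and throws a `NullPointerException`, matching the reported `short s = nodeToIndex.remove(tail);` line. So the NPE arises whenever `tail` is missing from `nodeToIndex`, even though no explicit dereference appears in the source.

```java
import java.util.HashMap;

public class UnboxingNpeDemo {
    public static void main(String[] args) {
        // Stand-in for nodeToIndex: maps a node key to its dictionary index.
        HashMap<String, Short> nodeToIndex = new HashMap<>();
        nodeToIndex.put("head", (short) 0);

        // Key present: remove() returns a boxed Short and unboxing succeeds.
        short ok = nodeToIndex.remove("head");
        System.out.println("removed index " + ok);

        // Key absent: remove() returns null. Assigning that null to a
        // primitive short forces auto-unboxing (Short.shortValue() on null),
        // which throws a NullPointerException before any use of the value.
        try {
            short s = nodeToIndex.remove("tail");
            System.out.println("unreachable: " + s);
        } catch (NullPointerException e) {
            System.out.println("NPE from unboxing null");
        }
    }
}
```

A defensive variant would assign to a boxed `Short` first and check for null before unboxing, which separates the "missing entry" case from the eviction path.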