Date: Thu, 27 Sep 2012 09:29:07 +1100 (NCT)
From: "Phabricator (JIRA)"
To: issues@hbase.apache.org
Message-ID: <628340513.131055.1348698547797.JavaMail.jiratomcat@arcas>
In-Reply-To: <783540961.116097.1348480807901.JavaMail.jiratomcat@arcas>
Subject: [jira] [Updated] (HBASE-6871) HFileBlockIndex Write Error BlockIndex in HFile V2
Mailing-List: contact issues-help@hbase.apache.org; run by ezmlm
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
X-JIRA-FingerPrint: 30527f35849b9dde25b450d4833f0394

     [ https://issues.apache.org/jira/browse/HBASE-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Phabricator updated HBASE-6871:
-------------------------------
    Attachment: D5703.2.patch

mbautin updated the revision "[jira] [HBASE-6871] [89-fb] Test case to reproduce block index corruption".

Reviewers: lhofhansl, Kannan, Liyin, stack, JIRA

  Addressing Michael's comments.
REVISION DETAIL
  https://reviews.facebook.net/D5703

AFFECTED FILES
  src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileInlineToRootChunkConversion.java

To: lhofhansl, Kannan, Liyin, stack, JIRA, mbautin

> HFileBlockIndex Write Error BlockIndex in HFile V2
> --------------------------------------------------
>
>                 Key: HBASE-6871
>                 URL: https://issues.apache.org/jira/browse/HBASE-6871
>             Project: HBase
>          Issue Type: Bug
>          Components: HFile
>    Affects Versions: 0.94.1
>         Environment: redhat 5u4
>            Reporter: Fenng Wang
>            Priority: Critical
>             Fix For: 0.94.3, 0.96.0
>
>         Attachments: 428a400628ae412ca45d39fce15241fd.hfile, 787179746cc347ce9bb36f1989d17419.hfile, 960a026ca370464f84903ea58114bc75.hfile, d0026fa8d59b4df291718f59dd145aad.hfile, D5703.1.patch, D5703.2.patch, hbase-6871-0.94.patch, ImportHFile.java, test_hfile_block_index.sh
>
> After writing some data, both compaction and scan operations fail; the exception message is below:
> 2012-09-18 06:32:26,227 ERROR org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: Compaction failed regionName=hfile_test,,1347778722498.d220df43fb9d8af4633bd7f547613f9e., storeName=page_info, fileCount=7, fileSize=1.3m (188.0k, 188.0k, 188.0k, 188.0k, 188.0k, 185.8k, 223.3k), priority=9, time=45826250816757428
> java.io.IOException: Could not reseek StoreFileScanner[HFileScanner for reader reader=hdfs://hadoopdev1.cm6:9000/hbase/hfile_test/d220df43fb9d8af4633bd7f547613f9e/page_info/b0f6118f58de47ad9d87cac438ee0895, compression=lzo, cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false], firstKey=http://com.truereligionbrandjeans.www/Womens_Dresses/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Shirts/pl/c/Womens_Shirts/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/Womens_Sweaters/pl/c/4010.html/page_info:anchor_sig/1347764439449/DeleteColumn, lastKey=http://com.trura.www//page_info:page_type/1347763395089/Put, avgKeyLen=776, avgValueLen=4, entries=12853, length=228611, cur=http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/1347764003865/Put/vlen=1/ts=0] to key http://com.truereligionbrandjeans.www/Womens_Exclusive_Details/pl/c/4970.html/page_info:is_deleted/OLDEST_TIMESTAMP/Minimum/vlen=0/ts=0
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:178)
>         at org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:54)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:299)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:244)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:521)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:402)
>         at org.apache.hadoop.hbase.regionserver.Store.compactStore(Store.java:1570)
>         at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:997)
>         at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1216)
>         at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:250)
>         at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Expected block type LEAF_INDEX, but got INTERMEDIATE_INDEX: blockType=INTERMEDIATE_INDEX, onDiskSizeWithoutHeader=8514, uncompressedSizeWithoutHeader=131837, prevBlockOffset=-1, dataBeginsWith=\x00\x00\x00\x9B\x00\x00\x00\x00\x00\x00\x03#\x00\x00\x050\x00\x00\x08\xB7\x00\x00\x0Cr\x00\x00\x0F\xFA\x00\x00\x120, fileOffset=218942
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.validateBlockType(HFileReaderV2.java:378)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:331)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:213)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:455)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:493)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:242)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:167)
> After some debugging, I found that when the hfile is being closed, if the rootChunk is empty, the single curInlineChunk is upgraded to the root chunk. But if flushing the last block makes curInlineChunk exceed the maximum index block size, the root chunk (upgraded from curInlineChunk) is split into intermediate index blocks, and the index level is set to 2.
> So when BlockIndexReader reads the root index, it expects the next-level index block to be a leaf index (index level=2), but the on-disk index block is an intermediate block, and the error occurs.
> After I added code to check curInlineChunk's size when rootChunk is empty in shouldWriteBlock(boolean closing), this bug was fixed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
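The decision the reporter describes can be sketched as below. This is a hypothetical illustration, not the actual HBase HFileBlockIndex code: the method shape and the names rootChunkEmpty, curInlineChunkSize, and maxChunkSize are assumptions made for the example. Only the idea it models comes from the issue: at close time, when the root chunk is empty, check curInlineChunk's size, so that an oversized chunk is flushed as an inline block instead of being promoted to a root chunk that must then be split (which silently raises the index to two levels while the reader still expects root-to-leaf).

```java
// Hypothetical sketch of the writer-side check described in the issue.
// NOT the real org.apache.hadoop.hbase.io.hfile.HFileBlockIndex code;
// all parameter names here are illustrative assumptions.
public class BlockIndexWriterSketch {

    /**
     * Decide whether curInlineChunk must be written out as an index block.
     * The bug: at close time an empty root chunk meant curInlineChunk was
     * promoted to root unconditionally; if it exceeded the maximum index
     * block size it was then split into intermediate blocks, creating a
     * two-level index the reader did not expect.
     */
    static boolean shouldWriteBlock(boolean closing,
                                    boolean rootChunkEmpty,
                                    int curInlineChunkSize,
                                    int maxChunkSize) {
        if (closing) {
            // The fix described above: flush an oversized inline chunk
            // first rather than promoting it directly to the root chunk.
            return rootChunkEmpty && curInlineChunkSize > maxChunkSize;
        }
        // Normal path: flush whenever the inline chunk is full.
        return curInlineChunkSize >= maxChunkSize;
    }

    public static void main(String[] args) {
        int maxChunkSize = 128 * 1024;
        // Oversized inline chunk at close time with an empty root chunk:
        // must be flushed first.
        System.out.println(shouldWriteBlock(true, true, 200_000, maxChunkSize));  // prints true
        // Small inline chunk at close time: safe to promote to root as-is.
        System.out.println(shouldWriteBlock(true, true, 4_096, maxChunkSize));    // prints false
    }
}
```

With this extra flush, the chunk that is eventually promoted to root always fits in a single block, so the index level recorded in the trailer matches what is on disk.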