Date: Wed, 24 Jun 2015 15:17:05 +0000 (UTC)
From: "reaz hedayati (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Updated] (HBASE-13962) Invalid HFile block magic

[ https://issues.apache.org/jira/browse/HBASE-13962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

reaz hedayati updated HBASE-13962:
----------------------------------

Description:

Hi everybody,

Our table contains some cells loaded through a bulk-load scenario and some cells written by increments. We use two jobs to load data into the table: the first job issues Increment operations on the reduce side, and the second job writes HFiles for bulk loading. We run the increment job first, then the bulk-load job, and then the completebulkload step (a sketch of both jobs follows).
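For context, here is a minimal sketch of what the two jobs described above might look like. This is an assumption about the setup, not code from the report: the class names, key types, the "count" qualifier, and the output path are hypothetical placeholders; only the table name "table1" and the column family "c2" come from the log below.

    // Sketch of job 1: reduce-side increments (HBase 0.98-era client API).
    // All identifiers except "table1" and "c2" are hypothetical placeholders.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Increment;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class CounterIncrementReducer
        extends Reducer<Text, LongWritable, Text, LongWritable> {

      private HTable table;

      @Override
      protected void setup(Context context) throws IOException {
        // One HTable per reducer task, reused for every key group.
        Configuration conf = HBaseConfiguration.create(context.getConfiguration());
        table = new HTable(conf, "table1");
      }

      @Override
      protected void reduce(Text key, Iterable<LongWritable> values, Context context)
          throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable v : values) {
          sum += v.get();
        }
        // One server-side increment per row key.
        Increment inc = new Increment(Bytes.toBytes(key.toString()));
        inc.addColumn(Bytes.toBytes("c2"), Bytes.toBytes("count"), sum);
        table.increment(inc);
      }

      @Override
      protected void cleanup(Context context) throws IOException {
        table.close();
      }
    }

Job 2 would then write HFiles and hand them to the bulk-load tool. A driver fragment, again only a sketch under the same assumptions, might look like:

    // Sketch of the job-2 driver; job name and output path are placeholders.
    Configuration conf = HBaseConfiguration.create();
    Job job = new Job(conf, "table1-bulkload");  // Hadoop 1.x constructor, matching the reported hadoop 1.2.1
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(KeyValue.class);
    HTable table = new HTable(conf, "table1");
    // Wires in the sort reducer, the total-order partitioner, and
    // HFileOutputFormat2 against table1's current region boundaries.
    HFileOutputFormat2.configureIncrementalLoad(job, table);
    FileOutputFormat.setOutputPath(job, new Path("/tmp/table1-hfiles"));
    // After the job succeeds, the completebulkload step moves the HFiles in:
    //   hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/table1-hfiles table1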
After running this pipeline, we get the following exception:

2015-06-24 17:40:01,557 INFO  [regionserver60020-smallCompactions-1434448531302] regionserver.HRegion: Starting compaction on c2 in region table1,\x04C#P1"\x07\x94 ,1435065082383.0fe38a6c782600e4d46f1f148144b489.
2015-06-24 17:40:01,558 INFO  [regionserver60020-smallCompactions-1434448531302] regionserver.HStore: Starting compaction of 3 file(s) in c2 of table1,\x04C#P1"\x07\x94 ,1435065082383.0fe38a6c782600e4d46f1f148144b489. into tmpdir=hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/.tmp, totalSize=43.1m
2015-06-24 17:40:01,558 DEBUG [regionserver60020-smallCompactions-1434448531302] regionserver.StoreFileInfo: reference 'hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5' to region=d21f8ee8b3c915fd9e1c143a0f1892e5 hfile=6b1249a3b474474db5cf6c664f2d98dc
2015-06-24 17:40:01,558 DEBUG [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: Compacting hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/6b1249a3b474474db5cf6c664f2d98dc.d21f8ee8b3c915fd9e1c143a0f1892e5-hdfs://m2/hbase2/data/default/table1/d21f8ee8b3c915fd9e1c143a0f1892e5/c2/6b1249a3b474474db5cf6c664f2d98dc-top, keycount=575485, bloomtype=ROW, size=20.8m, encoding=NONE, seqNum=9, earliestPutTs=1434875448405
2015-06-24 17:40:01,558 DEBUG [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: Compacting hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_, keycount=562988, bloomtype=ROW, size=10.1m, encoding=NONE, seqNum=11, earliestPutTs=1435076732205
2015-06-24 17:40:01,558 DEBUG [regionserver60020-smallCompactions-1434448531302] compactions.Compactor: Compacting hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/565c45ff05b14a419978834c86defa1a_SeqId_12_, keycount=554577, bloomtype=ROW, size=12.2m, encoding=NONE, seqNum=12, earliestPutTs=1435136926850
2015-06-24 17:40:01,560 ERROR [regionserver60020-smallCompactions-1434448531302] regionserver.CompactSplitThread: Compaction failed Request = regionName=table1,\x04C#P1"\x07\x94 ,1435065082383.0fe38a6c782600e4d46f1f148144b489., storeName=c2, fileCount=3, fileSize=43.1m (20.8m, 10.1m, 12.2m), priority=1, time=6077271921381072
java.io.IOException: Could not seek StoreFileScanner[org.apache.hadoop.hbase.io.HalfStoreFileReader$1@1d1eb574, cur=null] to key /c2:/LATEST_TIMESTAMP/DeleteFamily/vlen=0/mvcc=0
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:164)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:329)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:252)
	at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
	at org.apache.hadoop.hbase.regionserver.compactions.Compactor.createScanner(Compactor.java:299)
	at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:87)
	at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:112)
	at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1113)
	at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1519)
	at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:498)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to read compressed block at 10930320, onDiskSizeWithoutHeader=22342, preReadHeaderSize=33, header.length=33, header bytes: \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1549)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1413)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:394)
	at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:253)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:539)
	at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:560)
	at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader$Scanner.seekTo(AbstractHFileReader.java:308)
	at org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekTo(HalfStoreFileReader.java:205)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:244)
	at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:152)
	... 12 more
Caused by: java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
	at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
	at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:165)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:252)
	at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1546)
	... 21 more
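The nested causes show the reader finding 33 bytes of zeros where an HFile block header, including its 8-byte block magic (e.g. DATABLK* for a data block), was expected, and the failure occurs while seeking a HalfStoreFileReader, i.e. a reference file left behind by a region split. As a first diagnostic step, one could check whether the underlying HFile itself is readable with HBase's bundled HFile pretty-printer. The invocation below is a suggestion, not something from this report, applied to one of the files named in the Compactor log above:

    hbase org.apache.hadoop.hbase.io.hfile.HFile -v -m \
        -f hdfs://m2/hbase2/data/default/table1/0fe38a6c782600e4d46f1f148144b489/c2/41e13b20ee79435ebc260d11d3bf9920_SeqId_11_

Here -f names the file to inspect, -m prints its metadata and trailer, and -v makes the output verbose; adding -p (print key/values) forces the tool to scan every block and so should surface a corrupt block directly.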

was:
(The previous description was identical except that it did not yet mention running the completebulkload job; it carried the same compaction log and stack traces shown above.)
> Invalid HFile block magic
> -------------------------
>
>                 Key: HBASE-13962
>                 URL: https://issues.apache.org/jira/browse/HBASE-13962
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.12.1
>        Environment: hadoop 1.2.1
>                     hbase 0.98.12.1
>                     jdk 1.7.0_79
>                     os: ubuntu 12.04.1 amd64
>            Reporter: reaz hedayati
>
> (The description quoted here repeats the issue text, compaction log, and stack traces shown above.)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)