hbase-user mailing list archives

From Steve Fan <wjruo...@gmail.com>
Subject Re: Hbase compaction failed
Date Fri, 10 Oct 2014 01:55:50 GMT
hbase.regionserver.checksum.verify = true
hbase.hstore.checksum.algorithm = CRC32
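
These properties are set in hbase-site.xml (or through Cloudera Manager on CDH). For reference, the values above would look roughly like this — an illustrative sketch, not copied from the reporter's cluster:

```xml
<!-- hbase-site.xml fragment: the two checksum settings discussed above -->
<property>
  <name>hbase.regionserver.checksum.verify</name>
  <value>true</value>
</property>
<property>
  <name>hbase.hstore.checksum.algorithm</name>
  <value>CRC32</value>
</property>
```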


On Thu, Oct 9, 2014 at 10:06 PM, Ted Yu <yuzhihong@gmail.com> wrote:

> What are the values of the following configuration properties?
>
> hbase.regionserver.checksum.verify
> hbase.hstore.checksum.algorithm
>
> Cheers
>
> On Thu, Oct 9, 2014 at 6:29 AM, Steve Fan <wjruoxue@gmail.com> wrote:
>
> > I'm getting a compaction failure for a region after heavy writes.
> >
> > Running hbase hbck -details reports that everything is OK.
> >
> > I'm running HBase 0.98.1-cdh5.1.0
> >
> > ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction
> > failed Request =
> > regionName=table,rowkey,1409072707535.b3481b3baef0fdc711b178caf6a6072a.,
> > storeName=data, fileCount=3, fileSize=474.5 M (129.4 M, 214.8 M, 130.3 M),
> > priority=1, time=8587702075256007
> > java.lang.IndexOutOfBoundsException
> >         at java.nio.ByteBuffer.wrap(ByteBuffer.java:371)
> >         at org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:343)
> >         at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateBlockChecksum(ChecksumUtil.java:150)
> >         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.validateBlockChecksum(HFileBlock.java:1573)
> >         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockDataInternal(HFileBlock.java:1509)
> >         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1314)
> >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:355)
> >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:605)
> >         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:719)
> >         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:136)
> >         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
> >         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:507)
> >         at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:217)
> >         at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:76)
> >         at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:109)
> >         at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1080)
> >         at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1482)
> >         at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:475)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:745)
> >
>
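
For anyone reading the trace: the failing frame, ChecksumUtil.validateBlockChecksum, recomputes a checksum over the on-disk block bytes and compares it with the value stored alongside the block. A minimal sketch of that idea using java.util.zip.CRC32 — hypothetical class and method names, not the actual HBase code:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumSketch {

    // Recompute CRC32 over the data and compare it with the stored value,
    // loosely mirroring per-block checksum verification.
    static boolean validate(byte[] data, long storedChecksum) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue() == storedChecksum;
    }

    public static void main(String[] args) {
        byte[] block = "hfile block payload".getBytes(StandardCharsets.UTF_8);

        // Compute the checksum that would have been stored at write time.
        CRC32 crc = new CRC32();
        crc.update(block, 0, block.length);
        long stored = crc.getValue();

        System.out.println(validate(block, stored));   // prints true
        block[0] ^= 0xFF;                              // simulate corruption
        System.out.println(validate(block, stored));   // prints false
    }
}
```

A mismatch here would normally surface as a checksum failure, not an IndexOutOfBoundsException; the exception in the trace comes from ByteBuffer.wrap inside getBufferReadOnly, i.e. before the comparison itself.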
