hbase-user mailing list archives

From Pankaj kr <pankaj...@huawei.com>
Subject Region compaction failed
Date Fri, 13 Jan 2017 10:47:40 GMT
Hi,

We have run into a strange issue in our production environment.

Region compaction consistently fails with the following errors:

1.
2017-01-10 02:19:10,427 | ERROR | regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483858654825
| Compaction failed Request = regionName=XXXX., storeName=XYZ, fileCount=6, fileSize=100.7
M (3.2 M, 20.8 M, 15.1 M, 20.9 M, 21.0 M, 19.7 M), priority=-5, time=1747414906352088 | org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
java.io.IOException: ScanWildcardColumnTracker.checkColumn ran into a column actually smaller
than the previous column:  XXXXXXX
                at org.apache.hadoop.hbase.regionserver.ScanWildcardColumnTracker.checkVersions(ScanWildcardColumnTracker.java:114)
                at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:457)
                at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:551)
                at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:328)
                at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
                at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
                at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
                at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
                at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
                at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                at java.util.concurrent.ThreadPoolExecuto

2.
2017-01-10 02:33:53,009 | ERROR | regionserver/RS-HOST/RS-IP:PORT-longCompactions-1483686810953
| Compaction failed Request = regionName=YYYYYY, storeName=ABC, fileCount=6, fileSize=125.3
M (20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M, 20.9 M), priority=-68, time=1748294500157323 |
org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:562)
java.io.IOException: Non-increasing Bloom keys: XXXXXXXXXXXXXXXXXXXXXX after XXXXXXXXXXXX
                at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:911)
                at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:947)
                at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:337)
                at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:104)
                at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:133)
                at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1243)
                at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1895)
                at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:546)
                at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:583)
                at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
                at java.lang.Thread.run(Thread.java:745)

HBase version: 1.0.2
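As far as I can tell, both errors come down to the same invariant: cells handed to the compaction writer must be non-decreasing in KeyValue.COMPARATOR order, so the scan matcher (error 1) and the Bloom filter writer (error 2) both abort when a key sorts before the one written just before it. A minimal sketch of that invariant (class name, row, family and qualifier values are made up purely for illustration):

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class OrderingInvariantDemo {
    public static void main(String[] args) {
        // Two cells in the same row and family; qualifier "b" followed by "a"
        // violates the sort order expected within a store file.
        KeyValue first = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
                Bytes.toBytes("b"), Bytes.toBytes("v1"));
        KeyValue second = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
                Bytes.toBytes("a"), Bytes.toBytes("v2"));
        // A positive result means "second" sorts before "first", which is the
        // situation ScanWildcardColumnTracker reports as a column "actually
        // smaller than the previous column".
        int cmp = KeyValue.COMPARATOR.compare(first, second);
        System.out.println("compare(first, second) = " + cmp
                + (cmp > 0 ? " (out of order)" : ""));
    }
}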

We have verified all the HFiles in the store using HFilePrettyPrinter with the "-k" (checkrow) option, and every report is normal. A full scan also succeeds.
We do not have access to the actual data, and the customer may not agree to share it.
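For reference, the check we are relying on is roughly the one sketched below (hypothetical class name, HFile path taken from args[0], assuming the HBase 1.0.x HFile reader API). It walks a single HFile and flags any cell that sorts before its predecessor, a full-key check that is stricter than the row-level "-k" check in HFilePrettyPrinter:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;

public class HFileKeyOrderCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Path path = new Path(args[0]);                 // path to one HFile of the store
        FileSystem fs = path.getFileSystem(conf);
        HFile.Reader reader = HFile.createReader(fs, path, new CacheConfig(conf), conf);
        HFileScanner scanner = reader.getScanner(false, false); // no block cache, no pread
        long count = 0;
        long violations = 0;
        if (scanner.seekTo()) {
            KeyValue prev = null;
            do {
                Cell cur = scanner.getKeyValue();
                // Every cell must sort at or after the previous one.
                if (prev != null && KeyValue.COMPARATOR.compare(prev, cur) > 0) {
                    violations++;
                    System.out.println("Out-of-order cell #" + count + ": " + cur);
                }
                // Copy the cell so we do not compare against a reused block buffer.
                prev = KeyValueUtil.copyToNewKeyValue(cur);
                count++;
            } while (scanner.next());
        }
        System.out.println("Scanned " + count + " cells, " + violations + " order violations");
        reader.close();
    }
}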

Has anyone faced this issue? Any pointers would be much appreciated.

Thanks & Regards,
Pankaj
