hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: StackOverflowError while compaction?
Date Sun, 07 Jan 2018 00:50:18 GMT
Can you provide a bit more information?

- the data block encoding for the column family where this error occurred

- a pastebin of more of the region server log prior to the StackOverflowError
(after redaction)

- the Hadoop release of the HDFS cluster

- any non-default config which may be related

Thanks


On Sat, Jan 6, 2018 at 4:36 PM, Kang Minwoo <minwoo.kang@outlook.com> wrote:

> Hello,
>
> I have hit a StackOverflowError in a region server.
> The detailed error log is below.
>
> HBase version is 1.2.6
>
> DAYS:36,787 DEBUG [regionserver/longCompactions] regionserver.CompactSplitThread: Not compacting xxx. because compaction request was cancelled
> DAYS:36,787 DEBUG [regionserver/shortCompactions] compactions.ExploringCompactionPolicy: Exploring compaction algorithm has selected 0 files of size 0 starting at candidate #-1 after considering 3 permutations with 0 in ratio
> DAYS:36,787 DEBUG [regionserver/shortCompactions] compactions.RatioBasedCompactionPolicy: Not compacting files because we only have 0 files ready for compaction. Need 3 to initiate.
> DAYS:36,787 DEBUG [regionserver/shortCompactions] regionserver.CompactSplitThread: Not compacting xxx. because compaction request was cancelled
> DAYS:38,028 ERROR [B.defaultRpcServer.handler=x,queue=x,port=x] ipc.RpcServer: Unexpected throwable object
> java.lang.StackOverflowError
>         at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.unref(ShortCircuitCache.java:525)
>         at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitReplica.unref(ShortCircuitReplica.java:141)
>         at org.apache.hadoop.hdfs.BlockReaderLocal.close(BlockReaderLocal.java:644)
>         at org.apache.hadoop.hdfs.DFSInputStream.closeCurrentBlockReader(DFSInputStream.java:1682)
>         at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:616)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:709)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1440)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1648)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1532)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:452)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:271)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:649)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:599)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:268)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:461)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:476)
>
> Has anyone experienced a similar problem?
>
> Best regards,
> Minwoo Kang
>

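[Archive note: the repeated StoreFileScanner.seekToPreviousRow frames above show the failure mode — a seek that keeps re-invoking itself makes no progress and eventually exhausts the thread stack. The sketch below is a hypothetical minimal reproduction of that pattern, not HBase code; the class and method names are invented for illustration.]

```java
// Minimal illustration (not HBase code): a self-recursive seek with no
// progress guarantee throws java.lang.StackOverflowError, just like the
// StoreFileScanner.seekToPreviousRow frames in the trace above.
public class RecursiveSeekDemo {
    static int depth = 0; // counts frames pushed before the stack runs out

    // Stands in for a seek that always falls through to calling itself again.
    static void seekToPreviousRow() {
        depth++;
        seekToPreviousRow(); // no base case: recurses until the stack is gone
    }

    public static void main(String[] args) {
        try {
            seekToPreviousRow();
        } catch (StackOverflowError e) {
            // StackOverflowError is an Error, not an Exception; a demo may
            // catch it, but production code generally should not.
            System.out.println("StackOverflowError after " + depth + " frames");
        }
    }
}
```

The exact frame count depends on the JVM's thread stack size (`-Xss`); with the default it is typically in the tens of thousands, far past the roughly thousand-deep recursion a JVM trace prints before truncating.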