Date: Tue, 28 Jul 2015 10:50:04 +0000 (UTC)
From: "ramkrishna.s.vasudevan (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Comment Edited] (HBASE-14155) StackOverflowError in reverse scan

    [ https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644190#comment-14644190 ]

ramkrishna.s.vasudevan edited comment on HBASE-14155 at 7/28/15 10:49 AM:
--------------------------------------------------------------------------

[~giacomotaylor] The reason you could NOT directly reproduce this with your test code is that you did not set the DataBlockEncoding on the CF.

was (Author: ram_krish):
[~giacomotaylor] The reason why you could directly reproduce this with your test code was because you did not set the DataBlockEncoding on the CF.
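For reference, a minimal sketch (not taken from the attached repro; the table name DESCTEST_RAW and CF "0" are made up) of creating a column family with a DataBlockEncoding set via the HBase 1.1 client API, since the comment above points at the encoding as the missing ingredient. FAST_DIFF is used only as an example encoding. A further sketch of the flush-then-reverse-scan step follows after the quoted issue below.

{code}
// Hedged sketch: create a table whose CF has a DataBlockEncoding set.
// Table/CF names are hypothetical, not from the attached repro.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

public class CreateEncodedTable {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("DESCTEST_RAW"));
      HColumnDescriptor cf = new HColumnDescriptor("0");
      // The encoded-block read path is what the stack trace below goes through,
      // so the CF must actually have an encoding configured.
      cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
      desc.addFamily(cf);
      admin.createTable(desc);
    }
  }
}
{code}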
> StackOverflowError in reverse scan
> ----------------------------------
>
>                  Key: HBASE-14155
>                  URL: https://issues.apache.org/jira/browse/HBASE-14155
>              Project: HBase
>           Issue Type: Bug
>           Components: regionserver, Scanners
>     Affects Versions: 1.1.0
>             Reporter: James Taylor
>             Assignee: ramkrishna.s.vasudevan
>             Priority: Critical
>               Labels: Phoenix
>          Attachments: HBASE-14155.patch, ReproReverseScanStackOverflow.java, ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory (removing any earlier Phoenix version if one was installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +------------------------------------------+
> |                    K                     |
> +------------------------------------------+
> | a                                        |
> | ab                                       |
> | b                                        |
> +------------------------------------------+
> {code}
> - Stop and start HBase
> - Rerun the above query and you'll get a StackOverflowError at StoreFileScanner.seekToPreviousRow():
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>     at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
> {code}
> I've attempted to reproduce this in a standalone HBase unit test but have not been able to; I'll attach my attempt, which mimics what Phoenix is doing.
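Following on from the quoted repro steps and the sketch earlier in this message, here is a rough outline of what a standalone, Phoenix-free repro of the same shape could look like: write a few rows into the encoded table, flush so the data is read back from an HFile rather than the memstore, then run a reverse scan, which is the path that ends up in StoreFileScanner.seekToPreviousRow(). This is only an illustrative sketch, not the attached ReproReverseScanStackOverflow.java; table, CF, and qualifier names are assumptions carried over from the earlier sketch.

{code}
// Hedged sketch: populate the hypothetical DESCTEST_RAW table, flush it,
// then reverse-scan it with the HBase 1.1 client API.
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReverseScanSketch {
  public static void main(String[] args) throws Exception {
    TableName name = TableName.valueOf("DESCTEST_RAW"); // hypothetical table from the earlier sketch
    byte[] cf = Bytes.toBytes("0");
    byte[] qual = Bytes.toBytes("q");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(name);
         Admin admin = conn.getAdmin()) {
      // Rows matching the values used in the Phoenix repro above.
      for (String row : new String[] {"a", "ab", "b"}) {
        Put put = new Put(Bytes.toBytes(row));
        put.addColumn(cf, qual, Bytes.toBytes("v"));
        table.put(put);
      }
      // Flush so the scan reads from a store file instead of the memstore.
      admin.flush(name);
      Scan scan = new Scan();
      scan.setReversed(true); // the reverse scan drives seekToPreviousRow()
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result r : scanner) {
          System.out.println(Bytes.toStringBinary(r.getRow()));
        }
      }
    }
  }
}
{code}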