From dev-return-54678-archive-asf-public=cust-asf.ponee.io@phoenix.apache.org Wed Nov 28 10:46:05 2018
Mailing-List: contact dev-help@phoenix.apache.org; run by ezmlm
Reply-To: dev@phoenix.apache.org
Delivered-To: mailing list dev@phoenix.apache.org
Date: Wed, 28 Nov 2018 09:46:00 +0000 (UTC)
From: "Pedro Boado (JIRA)"
To: dev@phoenix.apache.org
Subject: [jira] [Updated] (PHOENIX-5047) can't upgrade phoenix from 4.13 to 4.14.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/PHOENIX-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pedro Boado updated PHOENIX-5047:
---------------------------------
    Environment: custom build (CB) of 4.13 on top of cdh 5.13.0, upgrading to CB of 4.14.1 on top of hbase cdh 5.14.2  (was: 4.13 on top of cdh 5.13.0 upgrading to 4.14.1 on top of hbase cdh 5.14.2)

> can't upgrade phoenix from 4.13 to 4.14.1
> -----------------------------------------
>
>                 Key: PHOENIX-5047
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5047
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.1
>         Environment: custom build (CB) of 4.13 on top of cdh 5.13.0, upgrading to CB of 4.14.1 on top of hbase cdh 5.14.2
>            Reporter: Ievgen Nekrashevych
>            Priority: Major
>              Labels: cdh
>
> The upgrade scenario is as follows:
> Install phoenix 4.13 on top of hbase 1.2.0-cdh5.13.0.
> Run a simple script to make sure some data is there:
> {code}
> -- system tables are created on the first connection
> create schema if not exists TS
> create table if not exists TS.TEST (STR varchar not null, INTCOL bigint not null, STARTTIME integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY (STR, INTCOL))
> create local index if not exists "TEST_INDEX" on TS.TEST (STR, STARTTIME)
> upsert into TS.TEST(STR, INTCOL, STARTTIME, DUMMY) values ('TEST', 4, 1, 3)
> -- make sure the data is there
> select * from TS.TEST
> {code}
> Then I shut down everything (queryserver, regionserver, master and zookeeper), install hbase 1.2.0-cdh5.14.2, replace the phoenix libs with 4.14.1 and start the servers. Trying to connect to the server and run:
> {code}
> select * from TS.TEST
> {code}
> I get:
> {code}
> 2018-11-28 07:53:03,088 ERROR [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] coprocessor.MetaDataEndpointImpl: Add column failed:
> org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
> 	at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
> 	at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2368)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumn(MetaDataEndpointImpl.java:3242)
> 	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16402)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7931)
> 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1969)
> 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1951)
> 	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
> 	at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> 	at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> 	at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(MetaDataEndpointImpl.java:2361)
> 	... 10 more
> {code}
> On subsequent calls I get the same exception with a slightly different message saying that I have different versions of client and server jars (with ArrayIndexOutOfBoundsException as the cause, and only ArrayIndexOutOfBoundsException in the server logs), which is not true.
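For context, an `ArrayIndexOutOfBoundsException: 63` thrown while rebuilding a PTable is a typical symptom of version skew: the server decodes a value persisted in SYSTEM:CATALOG by one build into a lookup array sized by the constants compiled into the running jar. A minimal sketch of that failure mode follows; the class and enum names are hypothetical illustrations, not Phoenix's actual code:

```java
public class OrdinalDecodeDemo {
    // Hypothetical enum standing in for a serialized schema attribute; a jar
    // that predates a newly added constant builds a shorter values() array.
    enum StorageScheme { ONE_CELL_PER_COLUMN, SINGLE_CELL_ARRAY_WITH_OFFSETS }

    static StorageScheme decode(int serializedOrdinal) {
        // values() is sized by the constants compiled into *this* jar, so an
        // ordinal persisted by a newer or incompatible build overflows it.
        return StorageScheme.values()[serializedOrdinal];
    }

    public static void main(String[] args) {
        try {
            decode(63); // the value read back from SYSTEM:CATALOG in the report
            System.out.println("decoded ok");
        } catch (ArrayIndexOutOfBoundsException e) {
            // mirrors the "Caused by" frame in the traces above
            System.out.println("caught ArrayIndexOutOfBoundsException");
        }
    }
}
```

This also explains why the "incompatible jars" message in the later traces is a plausible (if here misleading) diagnosis: the server genuinely cannot decode what is stored in the catalog.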
>
> Serverside exception:
> {code}
> 2018-11-28 08:45:00,611 ERROR [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020] coprocessor.MetaDataEndpointImpl: loading system catalog table inside getVersion failed
> java.lang.ArrayIndexOutOfBoundsException: 63
> 	at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> 	at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> 	at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1339)
> 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3721)
> 	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
> 	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
> 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
> 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
> 	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> {code}
> Clientside exception:
> {code}
> [2018-11-28 10:45:00] [INT08][2006] ERROR 2006 (INT08): Incompatible jars detected between client and server. Ensure that phoenix-[version]-server.jar is put on the classpath of HBase in every region server: org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
> [2018-11-28 10:45:00] 	at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:120)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3726)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16422)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7996)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1986)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1968)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> [2018-11-28 10:45:00] 	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> [2018-11-28 10:45:00] Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
> [2018-11-28 10:45:00] 	at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:517)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1073)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:614)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:1339)
> [2018-11-28 10:45:00] 	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getVersion(MetaDataEndpointImpl.java:3721)
> [2018-11-28 10:45:00] 	... 9 more
> {code}
> Note: phoenix.schema.isNamespaceMappingEnabled is set to true.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
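For reference, the namespace mapping flag mentioned in the report (which is why the catalog appears as SYSTEM:CATALOG rather than SYSTEM.CATALOG in the traces) is set in hbase-site.xml and must match on client and server side. A typical fragment, assuming the standard Phoenix configuration:

```xml
<!-- hbase-site.xml, on both clients and all region servers -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```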