phoenix-dev mailing list archives

From "Ievgen Nekrashevych (JIRA)" <>
Subject [jira] [Created] (PHOENIX-5047) can't upgrade phoenix from 4.13 to 4.14.1
Date Wed, 28 Nov 2018 08:40:00 GMT
Ievgen Nekrashevych created PHOENIX-5047:

             Summary: can't upgrade phoenix from 4.13 to 4.14.1
                 Key: PHOENIX-5047
             Project: Phoenix
          Issue Type: Bug
    Affects Versions: 4.14.1
         Environment: 4.13 on top of cdh 5.13.0
upgrading to 4.14.1 on top of hbase cdh 5.14.2

            Reporter: Ievgen Nekrashevych

The upgrade scenario is as follows:
install Phoenix 4.13 on top of HBase 1.2.0-cdh5.13.0, then run a simple script to make sure some
data is there:
-- system tables are created on the first connection
create schema if not exists TS
create table if not exists TS.TEST (STR varchar not null,INTCOL bigint not null, STARTTIME
integer, DUMMY integer default 0 CONSTRAINT PK PRIMARY KEY (STR, INTCOL))
create local index if not exists "TEST_INDEX" on TS.TEST (STR,STARTTIME)
upsert into TS.TEST(STR,INTCOL,STARTTIME,DUMMY) values ('TEST',4,1,3)
-- make sure the data is there
select * from TS.TEST
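A script like the one above can be driven non-interactively through Phoenix's sqlline.py client. This is only a sketch: PHOENIX_HOME, the ZooKeeper quorum, and the script filename are placeholders rather than paths from this report, and DRY_RUN=echo prints each command instead of executing it, so the sketch is safe to run as-is.

```shell
# Sketch only: PHOENIX_HOME, localhost:2181, and repro.sql are assumed
# values, not taken from this report. DRY_RUN=echo prints each command
# instead of executing it.
DRY_RUN=echo
PHOENIX_HOME=/opt/phoenix

# sqlline.py takes the ZooKeeper quorum and, optionally, a SQL file to run.
$DRY_RUN "$PHOENIX_HOME/bin/sqlline.py" localhost:2181 repro.sql
```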

Then I shut down everything (queryserver, regionserver, master and zookeeper), install HBase
1.2.0-cdh5.14.2, replace the Phoenix libs with 4.14.1, and start the servers. When I connect to
the server and run:
select * from TS.TEST
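The shutdown/replace/restart sequence above can be sketched roughly as follows. The paths and daemon names are assumptions for a plain (non-Cloudera-Manager-managed) install, and DRY_RUN=echo keeps the sketch from actually stopping or copying anything.

```shell
# Sketch under assumed paths (HBASE_HOME, PHOENIX_HOME, jar names); a CDH
# cluster managed by Cloudera Manager would use its own service controls.
# DRY_RUN=echo prints each command instead of executing it.
DRY_RUN=echo
HBASE_HOME=/opt/hbase
PHOENIX_HOME=/opt/phoenix

# 1. Stop everything: queryserver, regionserver, master, zookeeper.
$DRY_RUN "$PHOENIX_HOME/bin/queryserver.py" stop
$DRY_RUN "$HBASE_HOME/bin/hbase-daemon.sh" stop regionserver
$DRY_RUN "$HBASE_HOME/bin/hbase-daemon.sh" stop master
$DRY_RUN "$HBASE_HOME/bin/hbase-daemon.sh" stop zookeeper

# 2. Swap the Phoenix server jar under HBase's lib dir.
$DRY_RUN rm "$HBASE_HOME"/lib/phoenix-*-server.jar
$DRY_RUN cp phoenix-4.14.1-HBase-1.2-server.jar "$HBASE_HOME/lib/"

# 3. Start the servers again; Phoenix attempts to upgrade the SYSTEM
#    tables on the first client connection.
$DRY_RUN "$HBASE_HOME/bin/hbase-daemon.sh" start zookeeper
$DRY_RUN "$HBASE_HOME/bin/hbase-daemon.sh" start master
$DRY_RUN "$HBASE_HOME/bin/hbase-daemon.sh" start regionserver
$DRY_RUN "$PHOENIX_HOME/bin/queryserver.py" start
```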

I get:
2018-11-28 07:53:03,088 ERROR [RpcServer.FifoWFPBQ.default.handler=29,queue=2,port=60020]
coprocessor.MetaDataEndpointImpl: Add column failed: 
org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM:CATALOG: 63
        at org.apache.phoenix.util.ServerUtil.createIOException(
        at org.apache.phoenix.util.ServerUtil.throwIOException(
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumn(
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(
        at org.apache.hadoop.hbase.ipc.RpcExecutor$
        at org.apache.hadoop.hbase.ipc.RpcExecutor$
Caused by: java.lang.ArrayIndexOutOfBoundsException: 63
        at org.apache.phoenix.schema.PTableImpl.init(
        at org.apache.phoenix.schema.PTableImpl.<init>(
        at org.apache.phoenix.schema.PTableImpl.makePTable(
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.mutateColumn(
        ... 10 more

On subsequent calls I get the same exception with a slightly different message claiming that my
client and server jars differ, which is not true.

This message was sent by Atlassian JIRA
