phoenix-dev mailing list archives

From "Nithin (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (PHOENIX-3196) Array Index Out Of Bounds Exception
Date Tue, 23 Aug 2016 00:24:21 GMT

     [ https://issues.apache.org/jira/browse/PHOENIX-3196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nithin updated PHOENIX-3196:
----------------------------
    Description: 
Data set size: a table with 156 million rows and 200 columns.

This issue appears to have been resolved in Phoenix 3.0, but it still recurs.

Phoenix throws the following exception:

Error: org.apache.hadoop.hbase.DoNotRetryIOException: EPOEVENT: 18
	at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484)
	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705)
	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7764)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
	at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 18
	at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
	at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
	at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java:565)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:860)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451)
	... 10 more


To reproduce:
1) Create a table.
2) While creating indexes, issue an index-creation DDL that lists the same column name multiple times. Phoenix throws an error stating that the column name is used multiple times.
3) Correct the DDL and run the index creation again.

Please note that one of the columns on which the index was being created is a BIGINT.

It is not certain that running the faulty index-creation DDL is the root cause of this exception, but it started appearing after the steps above.

Effects:
1) Reads and writes both fail; all queries throw the same exception as above.
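The reproduction steps above can be sketched as Phoenix DDL. The table, index, and column names here are hypothetical placeholders, not the reporter's actual schema (which had ~200 columns and 156 million rows):

```sql
-- 1) Create a table.
CREATE TABLE EXAMPLE_TABLE (
    ID BIGINT NOT NULL PRIMARY KEY,
    COL_A VARCHAR,
    COL_B BIGINT
);

-- 2) Faulty index DDL: COL_A is listed twice, so Phoenix rejects it
--    with an error stating the column name is used multiple times.
CREATE INDEX EXAMPLE_IDX ON EXAMPLE_TABLE (COL_A, COL_A, COL_B);

-- 3) Corrected DDL, re-run. The ArrayIndexOutOfBoundsException was
--    observed after this sequence.
CREATE INDEX EXAMPLE_IDX ON EXAMPLE_TABLE (COL_A, COL_B);
```

Note that one of the indexed columns being a BIGINT matches the reporter's observation above.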



> Array Index Out Of Bounds Exception
> -----------------------------------
>
>                 Key: PHOENIX-3196
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3196
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>         Environment: Amazon EMR - 4.7.2
>            Reporter: Nithin
>            Priority: Critical
>             Fix For: 4.7.0
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
