phoenix-dev mailing list archives

From "Monani Mihir (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (PHOENIX-5092) Client like PHERF tool's thread dies because of unhandled exception in MutationState#commit()
Date Wed, 13 Mar 2019 09:46:00 GMT

     [ https://issues.apache.org/jira/browse/PHOENIX-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Monani Mihir updated PHOENIX-5092:
----------------------------------
    Description: 
After starting a Pherf load against an unsalted table (with a few indexes), the data table region will split and move to another region server. When the region moves, the client (all of its threads) dies with the following exception:
{code:java}
2019-03-08 09:41:32,678 WARN [pool-8-thread-25] execute.MutationState - THREAD_ABORT MutationState#send(Iterator<TableRef>) :-
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 36 actions: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. key=1873403620592046670 region=PHERF:TABLE1,1552037797977.20beae29172b4bec422a6984e088eeae.host=phoenix-host1,60020,1552037496260
Index update failed
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:112)
at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:101)
at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaData(PhoenixIndexMetaDataBuilder.java:51)
at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:100)
at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:73)
at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexMetaData(IndexBuildManager.java:79)
at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:385)
at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:345)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1025)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1727)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1021)
at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3309)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3076)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3018)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:914)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:842)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2397)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35080)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata. key=1873403620592046670 region=PHERF:TABLE1,1552037797977.20beae29172b4bec422a6984e088eeae.host=phoenix-host1,60020,1552037496260
at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:100)
... 22 more
: 36 times, servers with issues: phoenix-host1,60020,1552037496260
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:260)
at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2400(AsyncProcess.java:240)
at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1711)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:917)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:931)
at org.apache.phoenix.execute.MutationState$3.doMutation(MutationState.java:992)
at org.apache.phoenix.index.PhoenixIndexFailurePolicy.doBatchWithRetries(PhoenixIndexFailurePolicy.java:480)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:988)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1368)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1188)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
at org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)
at org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

{code}
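As a possible client-side mitigation, here is a minimal sketch (hypothetical code, not Pherf's actual WriteWorkload; the JDBC URL "jdbc:phoenix:localhost", table TABLE1, and the retry budget are placeholders) of how a writer thread could catch the SQLException thrown by Connection#commit(), roll back, and retry the batch instead of letting the exception propagate and kill the thread:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical guard around a write batch: if commit() throws (e.g. a
// RetriesExhaustedWithDetailsException surfaced as SQLException after a
// region move), roll back and re-run the batch instead of letting the
// exception escape and abort the worker thread, as in the trace above.
public class GuardedUpsertWorker implements Runnable {
    private static final int MAX_ATTEMPTS = 3; // hypothetical retry budget

    @Override
    public void run() {
        // "jdbc:phoenix:localhost" and TABLE1 are placeholders, not Pherf's config.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(false);
            SQLException last = null;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    writeBatch(conn);
                    conn.commit();
                    return; // success, worker thread survives
                } catch (SQLException e) {
                    last = e;
                    conn.rollback(); // drop the pending mutations before retrying
                }
            }
            System.err.println("Batch failed after " + MAX_ATTEMPTS + " attempts: " + last);
        } catch (SQLException e) {
            System.err.println("Connection-level failure: " + e.getMessage());
        }
    }

    private static void writeBatch(Connection conn) throws SQLException {
        try (PreparedStatement stmt =
                 conn.prepareStatement("UPSERT INTO TABLE1 (ID, VAL) VALUES (?, ?)")) {
            stmt.setLong(1, 1L);
            stmt.setString(2, "value");
            stmt.executeUpdate();
        }
    }
}
{code}
The sketch re-runs the whole batch after a rollback rather than only retrying commit(), since rolled-back mutations are discarded on the client.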

  was:
After starting a Pherf load against an unsalted table (with a few indexes), the data table region will split and move to another region server. When the region moves, the client (all of its threads) dies with the following exception:
{code:java}
2018-12-19 10:45:22,830 WARN [pool-8-thread-39] cache.ServerCacheClient - Unable to remove hash cache for [region=table1,1545216068270.0310dab896249506cb1de9b6badd7fa4., hostname=phoenix-test1,60020,1545213555354, seqNum=40685]
java.io.InterruptedIOException: Interrupted calling coprocessor service org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService for row tenantABCDId2loginUserId0030F900000000X418G\x00messageTextId007419receipientId000007420
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1787)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1736)
at org.apache.phoenix.cache.ServerCacheClient.removeServerCache(ServerCacheClient.java:357)
at org.apache.phoenix.cache.ServerCacheClient.access$000(ServerCacheClient.java:85)
at org.apache.phoenix.cache.ServerCacheClient$ServerCache.close(ServerCacheClient.java:207)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1072)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1350)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1173)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
at org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)
at org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
at java.util.concurrent.FutureTask.get(FutureTask.java:191)
at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1780)
... 17 more
[pool-8-thread-39] INFO org.apache.phoenix.execute.MutationState - Abort successful
{code}


> Client like PHERF tool's thread dies because of unhandled exception in MutationState#commit()
> ---------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5092
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5092
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.1
>            Reporter: Monani Mihir
>            Assignee: Monani Mihir
>            Priority: Major
>              Labels: SFDC, client
>
> After starting a Pherf load against an unsalted table (with a few indexes), the data table region will split and move to another region server. When the region moves, the client (all of its threads) dies with the following exception:
> {code:java}
> 2019-03-08 09:41:32,678 WARN [pool-8-thread-25] execute.MutationState - THREAD_ABORT MutationState#send(Iterator<TableRef>) :-
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 36 actions: org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. key=1873403620592046670 region=PHERF:TABLE1,1552037797977.20beae29172b4bec422a6984e088eeae.host=phoenix-host1,60020,1552037496260
> Index update failed
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:112)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:86)
> at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:101)
> at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaData(PhoenixIndexMetaDataBuilder.java:51)
> at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:100)
> at org.apache.phoenix.index.PhoenixIndexBuilder.getIndexMetaData(PhoenixIndexBuilder.java:73)
> at org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexMetaData(IndexBuildManager.java:79)
> at org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:385)
> at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:345)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1025)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1727)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1021)
> at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3309)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3076)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3018)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:914)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:842)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2397)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35080)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2399)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> Caused by: java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata. key=1873403620592046670 region=PHERF:TABLE1,1552037797977.20beae29172b4bec422a6984e088eeae.host=phoenix-host1,60020,1552037496260
> at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.index.PhoenixIndexMetaDataBuilder.getIndexMetaDataCache(PhoenixIndexMetaDataBuilder.java:100)
> ... 22 more
> : 36 times, servers with issues: phoenix-host1,60020,1552037496260
> at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:260)
> at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$2400(AsyncProcess.java:240)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1711)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:917)
> at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:931)
> at org.apache.phoenix.execute.MutationState$3.doMutation(MutationState.java:992)
> at org.apache.phoenix.index.PhoenixIndexFailurePolicy.doBatchWithRetries(PhoenixIndexFailurePolicy.java:480)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:988)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1368)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1188)
> at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
> at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
> at org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)
> at org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
