phoenix-dev mailing list archives

From "Rajeshbabu Chintaguntla (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (PHOENIX-4685) Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch
Date Thu, 12 Apr 2018 10:13:00 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435279#comment-16435279 ]

Rajeshbabu Chintaguntla edited comment on PHOENIX-4685 at 4/12/18 10:12 AM:
----------------------------------------------------------------------------

[~ankit@apache.org] [~elserj] Agreed that we should not use the short-circuit connection on the
server, so that custom configurations are respected. We already have a mechanism to create and
cache our own HConnection, but it is per region, which is why we see the overhead of so many
threads being created for each connection. Instead, we can have one connection per server that
behaves the same as the short-circuit connection on the server.
I have uploaded a simple patch that makes the connection static so that only one connection is
created. Please review.
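
To make the idea concrete, here is a minimal sketch of a server-wide static connection (illustrative only, not the attached patch; the class and method names are hypothetical): all regions hosted by a region server share one lazily created Connection, so the meta-lookup and shared thread pools exist once per server rather than once per region.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Illustrative sketch only: a single static connection shared by every region
// hosted on this server, instead of one connection (with its own thread pools)
// per region.
public final class ServerSharedConnection {

    // Created at most once per region server process.
    private static volatile Connection connection;

    private ServerSharedConnection() {
    }

    public static Connection get(Configuration conf) throws IOException {
        if (connection == null) {
            synchronized (ServerSharedConnection.class) {
                if (connection == null) {
                    // One connection means one set of meta-lookup/shared threads
                    // for the whole server, avoiding the thread explosion shown
                    // in the stack trace below.
                    connection = ConnectionFactory.createConnection(conf);
                }
            }
        }
        return connection;
    }
}
{code}

The attached patch applies the same idea to the connection the index write path already creates and caches.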


was (Author: rajeshbabu):
[~ankit@apache.org] [~elserj] Agreed that we should not use the short-circuit connection on the
server, so that custom configurations are respected. We already have a mechanism to create and
cache our own HConnection, but it is per region, which is why we see the overhead of so many
threads being created for each connection. Instead, we can have one connection per server that
behaves the same as the short-circuit connection on the server.
I have uploaded a simple patch to do the same. Please review.

> Parallel writes continuously to indexed table failing with OOME very quickly in 5.x branch
> ------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4685
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4685
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Rajeshbabu Chintaguntla
>            Assignee: Rajeshbabu Chintaguntla
>            Priority: Major
>             Fix For: 5.0.0
>
>         Attachments: PHOENIX-4685.patch, PHOENIX-4685_jstack, PHOENIX-4685_v2.patch
>
>
> Currently, writing data to an indexed table fails with an OOME ("unable to create new native
> thread"), whereas the same workload works fine on the 4.7.x branches. Many threads are created
> for meta lookups and shared pools, leaving no room to create new threads. This happens even with
> short-circuit writes enabled.
> {noformat}
> 2018-04-08 13:06:04,747 WARN  [RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16020] index.PhoenixIndexFailurePolicy: handleFailure failed
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
>         at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
>         at org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailureWithExceptions(PhoenixIndexFailurePolicy.java:217)
>         at org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:143)
>         at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:160)
>         at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:144)
>         at org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:632)
>         at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:607)
>         at org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:590)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1037)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1034)
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
>         at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1034)
>         at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:3533)
>         at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3914)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3822)
>         at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3753)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
>         at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
>         at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>         at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.reflect.UndeclaredThrowableException
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1761)
>         at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
>         at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
>         at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
>         at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:183)
>          ... 25 more
> Caused by: java.lang.Exception: java.lang.OutOfMemoryError: unable to create new native thread
>         at org.apache.phoenix.index.PhoenixIndexFailurePolicy$1.run(PhoenixIndexFailurePolicy.java:266)
>         at org.apache.phoenix.index.PhoenixIndexFailurePolicy$1.run(PhoenixIndexFailurePolicy.java:217)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>         ... 32 more
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:714)
>         at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
>         at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
>         at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1007)
>         at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:986)
>         at org.apache.phoenix.util.IndexUtil.updateIndexState(IndexUtil.java:724)
>         at org.apache.phoenix.util.IndexUtil.updateIndexState(IndexUtil.java:709)
>         at org.apache.phoenix.index.PhoenixIndexFailurePolicy$1.run(PhoenixIndexFailurePolicy.java:236)
>         ... 36 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
