phoenix-dev mailing list archives

From "James Taylor (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (PHOENIX-3772) Local Index - table split fails and OOM
Date Thu, 11 May 2017 17:28:04 GMT

     [ https://issues.apache.org/jira/browse/PHOENIX-3772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Taylor resolved PHOENIX-3772.
-----------------------------------
    Resolution: Cannot Reproduce

> Local Index - table split fails and OOM
> ---------------------------------------
>
>                 Key: PHOENIX-3772
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3772
>             Project: Phoenix
>          Issue Type: Bug
>         Environment: HBase 0.98.23, Phoenix v4.9.0 or the current head of the 4.x-0.98 branch.
4-node cluster with 12 GB heap.
>            Reporter: Mujtaba Chohan
>
> Used the following DDL for the base table + local index and loaded data into this empty table
with a batch size of 1000 (a load-loop sketch follows the DDL below).
> {noformat}
> CREATE TABLE IF NOT EXISTS T (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,
>     PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID CHAR(15),
>     CREATED_BY_ID VARCHAR, FH VARCHAR, DT VARCHAR, OS VARCHAR, NS VARCHAR, OFN VARCHAR
>     CONSTRAINT PK PRIMARY KEY (PKA, PKF, PKP, CRD DESC, EHI))
>     VERSIONS=1, MULTI_TENANT=true, IMMUTABLE_ROWS=true
>
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON T (PKF, CRD, PKP, EHI)
> {noformat}
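> Data was loaded with a plain JDBC upsert loop along these lines (a minimal sketch rather than
the actual loader; the connection URL, row volume, and generated values are assumptions made for
illustration, and only the commit-every-1000-rows batching matches the description above):
> {noformat}
> // Sketch: batch-load rows into T through the Phoenix JDBC driver, committing every 1000 rows.
> // The quorum, the total row count, and the generated values are illustrative assumptions.
> import java.sql.Connection;
> import java.sql.Date;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
> import java.util.UUID;
>
> public class BatchLoader {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
>             conn.setAutoCommit(false); // mutations are buffered client-side until commit()
>             PreparedStatement ps = conn.prepareStatement(
>                 "UPSERT INTO T (PKA, PKF, PKP, CRD, EHI, FID) VALUES (?, ?, ?, ?, ?, ?)");
>             final int batchSize = 1000;
>             final long totalRows = 1_000_000L; // assumed volume, not stated in the report
>             for (long i = 0; i < totalRows; i++) {
>                 ps.setString(1, fixed(15, "A" + i));
>                 ps.setString(2, fixed(3, Long.toString(i % 999)));
>                 ps.setString(3, fixed(15, "P" + i));
>                 ps.setDate(4, new Date(System.currentTimeMillis() - i));
>                 ps.setString(5, fixed(15, UUID.randomUUID().toString()));
>                 ps.setString(6, fixed(15, "F" + i));
>                 ps.executeUpdate();
>                 if ((i + 1) % batchSize == 0) {
>                     conn.commit(); // flush one batch of 1000 mutations to the cluster
>                 }
>             }
>             conn.commit(); // flush any trailing partial batch
>         }
>     }
>
>     // Pad or trim a value to the CHAR(n) width used in the table definition.
>     private static String fixed(int n, String s) {
>         String p = String.format("%-" + n + "s", s);
>         return p.length() > n ? p.substring(0, n) : p;
>     }
> }
> {noformat}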
> With either a manual or an automatic split of the table, the region state remains in SPLITTING_NEW
forever and memory consumption keeps creeping up until OOM. There are also a number of "scan appears
to have been leaked" warnings from the update statistics tasks.
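> The manual split referenced above would typically be issued through the HBase admin API; a
minimal sketch against the HBase 0.98 client follows (the exact way the split was triggered is not
stated in the report, and the split key used here is purely illustrative):
> {noformat}
> // Sketch: request a manual split of table T with the HBase 0.98 client API.
> // Assumptions: hbase-site.xml on the classpath supplies the quorum; split key is illustrative.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.HBaseAdmin;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class ManualSplit {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml for the quorum
>         try (HBaseAdmin admin = new HBaseAdmin(conf)) {
>             // Ask for a split of table T at an explicit row key; if no split point is
>             // supplied, HBase picks the midpoint of the largest store file itself.
>             admin.split(Bytes.toBytes("T"), Bytes.toBytes("00Dxx0000001gES"));
>         }
>     }
> }
> {noformat}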
> Relevant log:
> {noformat}
> 14:23:53,945 INFO  [RS:0;vampire8:33129-splits-1491600233914] regionserver.SplitTransaction:
Starting split of region T,,1491598353059.a4803019c76c77f67034c8526188b7f7.
> 14:23:53,946 DEBUG [RS:0;vampire8:33129-splits-1491600233914] regionserver.SplitTransaction:
regionserver:33129-0x15b44dc4ed70001, =quorum, baseZNode=/hbase Creating ephemeral node for
a4803019c76c77f67034c8526188b7f7 in PENDING_SPLIT state
> 14:23:53,957 DEBUG [RS:0;vampire8:33129-splits-1491600233914] regionserver.SplitTransaction:
Still waiting for master to process the pending_split for a4803019c76c77f67034c8526188b7f7
> 14:23:53,957 DEBUG [RS:0;vampire8:33129-splits-1491600233914] zookeeper.ZKAssign: regionserver:33129-0x15b44dc4ed70001,
=quorum, baseZNode=/hbase Transitioning a4803019c76c77f67034c8526188b7f7 from RS_ZK_REQUEST_REGION_SPLIT
to RS_ZK_REQUEST_REGION_SPLIT
> 14:23:53,968 DEBUG [RS:0;vampire8:33129-splits-1491600233914] zookeeper.ZKAssign: regionserver:33129-0x15b44dc4ed70001,
=quorum, baseZNode=/hbase Transitioned node a4803019c76c77f67034c8526188b7f7 from RS_ZK_REQUEST_REGION_SPLIT
to RS_ZK_REQUEST_REGION_SPLIT
> 14:23:53,969 DEBUG [AM.ZK.Worker-pool2-t43] master.AssignmentManager: Handling RS_ZK_REQUEST_REGION_SPLIT,
server=vampire8.internal.salesforce.com,33129,1491508942482, region=a4803019c76c77f67034c8526188b7f7,
current_state={a4803019c76c77f67034c8526188b7f7 state=OPEN, ts=1491598355819, server=vampire8.internal.salesforce.com,33129,1491508942482}
> 14:23:53,970 DEBUG [AM.ZK.Worker-pool2-t43] zookeeper.ZKAssign: master:46160-0x15b44dc4ed70000,
=quorum, baseZNode=/hbase Transitioning a4803019c76c77f67034c8526188b7f7 from RS_ZK_REQUEST_REGION_SPLIT
to RS_ZK_REGION_SPLITTING
> 14:23:53,981 DEBUG [AM.ZK.Worker-pool2-t43] zookeeper.ZKAssign: master:46160-0x15b44dc4ed70000,
=quorum, baseZNode=/hbase Transitioned node a4803019c76c77f67034c8526188b7f7 from RS_ZK_REQUEST_REGION_SPLIT
to RS_ZK_REGION_SPLITTING
> 14:23:53,982 INFO  [AM.ZK.Worker-pool2-t43] master.RegionStates: Transition null to {459beac9a269117b72485e9419144c3b
state=SPLITTING_NEW, ts=1491600233982, server=vampire8.internal.salesforce.com,33129,1491508942482}
> 14:23:53,982 INFO  [AM.ZK.Worker-pool2-t43] master.RegionStates: Transition null to {4a60b9c39fc6501c5b37ad429b3ddcf8
state=SPLITTING_NEW, ts=1491600233982, server=vampire8.internal.salesforce.com,33129,1491508942482}
> 14:23:53,982 INFO  [AM.ZK.Worker-pool2-t43] master.RegionStates: Transition {a4803019c76c77f67034c8526188b7f7
state=OPEN, ts=1491598355819, server=vampire8.internal.salesforce.com,33129,1491508942482}
to {a4803019c76c77f67034c8526188b7f7 state=SPLITTING, ts=1491600233982, server=vampire8.internal.salesforce.com,33129,1491508942482}
> 14:23:53,984 DEBUG [AM.ZK.Worker-pool2-t44] master.AssignmentManager: Handling RS_ZK_REGION_SPLITTING,
server=vampire8.internal.salesforce.com,33129,1491508942482, region=a4803019c76c77f67034c8526188b7f7,
current_state={a4803019c76c77f67034c8526188b7f7 state=SPLITTING, ts=1491600233982, server=vampire8.internal.salesforce.com,33129,1491508942482}
> 14:23:54,073 DEBUG [RS:0;vampire8:33129-splits-1491600233914] regionserver.HRegion: Closing
T,,1491598353059.a4803019c76c77f67034c8526188b7f7.: disabling compactions & flushes
> 14:24:10,253 INFO  [1767285516@qtp-650232729-6] zookeeper.RecoverableZooKeeper: Process
identifier=catalogtracker-on-hconnection-0x5f43c773 connecting to ZooKeeper ensemble=quorum
> 14:24:10,253 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Initiating client
connection, connectString=quorum sessionTimeout=10000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@7428cc3f
> 14:24:10,254 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Starting catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@589dbdc6
> 14:24:10,255 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate
using SASL (unknown error)
> 14:24:10,256 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Socket connection established to localhost/127.0.0.1:2181, initiating session
> 14:24:10,256 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory:
Accepted socket connection from /127.0.0.1:42943
> 14:24:10,256 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer:
Client attempting to establish new session at /127.0.0.1:42943
> 14:24:10,274 INFO  [ZooKeeperWatcher and Master delayed closing for connection hconnection-0x593ae536]
client.HConnectionManager$HConnectionImplementation$DelayedClosing: ZooKeeperWatcher and Master
delayed closing for connection hconnection-0x593ae536 exiting
> 14:24:10,278 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x15b44dc4ed7019c
with negotiated timeout 10000 for client /127.0.0.1:42943
> 14:24:10,278 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15b44dc4ed7019c,
negotiated timeout = 10000
> 14:24:10,285 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Stopping catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@589dbdc6
> 14:24:10,286 INFO  [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed
session termination for sessionid: 0x15b44dc4ed7019c
> 14:24:10,294 INFO  [ZooKeeperWatcher and Master delayed closing for connection hconnection-0x35f145cf]
client.HConnectionManager$HConnectionImplementation$DelayedClosing: ZooKeeperWatcher and Master
delayed closing for connection hconnection-0x35f145cf exiting
> 14:24:10,294 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Session: 0x15b44dc4ed7019c
closed
> 14:24:10,294 INFO  [1767285516@qtp-650232729-6-EventThread] zookeeper.ClientCnxn: EventThread
shut down
> 14:24:10,294 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
Closed socket connection for client /127.0.0.1:42943 which had sessionid 0x15b44dc4ed7019c
> 14:24:10,296 INFO  [1767285516@qtp-650232729-6] zookeeper.RecoverableZooKeeper: Process
identifier=catalogtracker-on-hconnection-0x5f43c773 connecting to ZooKeeper ensemble=quorum
> 14:24:10,296 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Initiating client
connection, connectString=quorum sessionTimeout=10000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@46ae65ed
> 14:24:10,297 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Starting catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@822db1f
> 14:24:10,299 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate
using SASL (unknown error)
> 14:24:10,300 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Socket connection established to localhost/127.0.0.1:2181, initiating session
> 14:24:10,300 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory:
Accepted socket connection from /127.0.0.1:42945
> 14:24:10,300 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer:
Client attempting to establish new session at /127.0.0.1:42945
> 14:24:10,306 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x15b44dc4ed7019d
with negotiated timeout 10000 for client /127.0.0.1:42945
> 14:24:10,306 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15b44dc4ed7019d,
negotiated timeout = 10000
> 14:24:10,332 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Stopping catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@822db1f
> 14:24:10,333 INFO  [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed
session termination for sessionid: 0x15b44dc4ed7019d
> 14:24:10,339 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Session: 0x15b44dc4ed7019d
closed
> 14:24:10,339 INFO  [1767285516@qtp-650232729-6-EventThread] zookeeper.ClientCnxn: EventThread
shut down
> 14:24:10,339 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
Closed socket connection for client /127.0.0.1:42945 which had sessionid 0x15b44dc4ed7019d
> 14:24:21,259 INFO  [ZooKeeperWatcher and Master delayed closing for connection hconnection-0x4e8705b6]
client.HConnectionManager$HConnectionImplementation$DelayedClosing: ZooKeeperWatcher and Master
delayed closing for connection hconnection-0x4e8705b6 exiting
> 14:24:21,387 INFO  [ZooKeeperWatcher and Master delayed closing for connection hconnection-0x6e0b2ae5]
client.HConnectionManager$HConnectionImplementation$DelayedClosing: ZooKeeperWatcher and Master
delayed closing for connection hconnection-0x6e0b2ae5 exiting
> 14:24:25,004 INFO  [1767285516@qtp-650232729-6] zookeeper.RecoverableZooKeeper: Process
identifier=catalogtracker-on-hconnection-0x5f43c773 connecting to ZooKeeper ensemble=quorum
> 14:24:25,004 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Initiating client
connection, connectString=quorum sessionTimeout=10000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@78853f1
> 14:24:25,007 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Starting catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@61ed2151
> 14:24:25,009 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate
using SASL (unknown error)
> 14:24:25,010 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Socket connection established to localhost/127.0.0.1:2181, initiating session
> 14:24:25,010 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory:
Accepted socket connection from /127.0.0.1:42947
> 14:24:25,010 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer:
Client attempting to establish new session at /127.0.0.1:42947
> 14:24:25,032 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x15b44dc4ed7019e
with negotiated timeout 10000 for client /127.0.0.1:42947
> 14:24:25,033 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15b44dc4ed7019e,
negotiated timeout = 10000
> 14:24:25,040 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Stopping catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@61ed2151
> 14:24:25,041 INFO  [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed
session termination for sessionid: 0x15b44dc4ed7019e
> 14:24:25,048 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Session: 0x15b44dc4ed7019e
closed
> 14:24:25,049 INFO  [1767285516@qtp-650232729-6-EventThread] zookeeper.ClientCnxn: EventThread
shut down
> 14:24:25,049 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
Closed socket connection for client /127.0.0.1:42947 which had sessionid 0x15b44dc4ed7019e
> 14:24:25,051 INFO  [1767285516@qtp-650232729-6] zookeeper.RecoverableZooKeeper: Process
identifier=catalogtracker-on-hconnection-0x5f43c773 connecting to ZooKeeper ensemble=quorum
> 14:24:25,051 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Initiating client
connection, connectString=quorum sessionTimeout=10000 watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@17d7ae7d
> 14:24:25,051 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Starting catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@4c6ddd60
> 14:24:25,053 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate
using SASL (unknown error)
> 14:24:25,054 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Socket connection established to localhost/127.0.0.1:2181, initiating session
> 14:24:25,054 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory:
Accepted socket connection from /127.0.0.1:42949
> 14:24:25,054 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer:
Client attempting to establish new session at /127.0.0.1:42949
> 14:24:25,060 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x15b44dc4ed7019f
with negotiated timeout 10000 for client /127.0.0.1:42949
> 14:24:25,061 INFO  [1767285516@qtp-650232729-6-SendThread(quorum)] zookeeper.ClientCnxn:
Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15b44dc4ed7019f,
negotiated timeout = 10000
> 14:24:25,086 DEBUG [1767285516@qtp-650232729-6] catalog.CatalogTracker: Stopping catalog
tracker org.apache.hadoop.hbase.catalog.CatalogTracker@4c6ddd60
> 14:24:25,087 INFO  [ProcessThread(sid:0 cport:-1):] server.PrepRequestProcessor: Processed
session termination for sessionid: 0x15b44dc4ed7019f
> 14:24:25,094 INFO  [1767285516@qtp-650232729-6] zookeeper.ZooKeeper: Session: 0x15b44dc4ed7019f
closed
> 14:24:25,094 INFO  [1767285516@qtp-650232729-6-EventThread] zookeeper.ClientCnxn: EventThread
shut down
> 14:24:25,094 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn:
Closed socket connection for client /127.0.0.1:42949 which had sessionid 0x15b44dc4ed7019f
> 14:24:29,928 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600190310, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,928 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600190311, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,929 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600190312, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,929 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-2:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600190629, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,930 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-2:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600190630, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,930 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-2:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600190631, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,931 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status htable-pool715-t1:
status=Servicing call from null:0: Multi, state=RUNNING, startTime=1491600190645, completionTime=-1,
rpcMethod=Multi appears to have been leaked
> 14:24:29,931 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-2:
status=Servicing call from null:0: ExecService, state=RUNNING, startTime=1491600190647, completionTime=-1,
rpcMethod=ExecService appears to have been leaked
> 14:24:29,931 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600201402, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,932 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600201403, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,932 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600201404, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,933 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-3:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600205023, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,933 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-3:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600205024, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,933 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-3:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600205025, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,934 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status htable-pool717-t1:
status=Servicing call from null:0: Multi, state=RUNNING, startTime=1491600205038, completionTime=-1,
rpcMethod=Multi appears to have been leaked
> 14:24:29,934 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-3:
status=Servicing call from null:0: ExecService, state=RUNNING, startTime=1491600205039, completionTime=-1,
rpcMethod=ExecService appears to have been leaked
> 14:24:29,935 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600212567, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,935 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600212594, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,936 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status RS:0;vampire8:33129-smallCompactions-1491509020238:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600212596, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,937 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-0:
status=Servicing call from null:0: Get, state=RUNNING, startTime=1491600212907, completionTime=-1,
rpcMethod=Get appears to have been leaked
> 14:24:29,937 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-0:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600212908, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,938 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-0:
status=Servicing call from null:0: Scan, state=RUNNING, startTime=1491600212909, completionTime=-1,
rpcMethod=Scan appears to have been leaked
> 14:24:29,938 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status htable-pool719-t1:
status=Servicing call from null:0: Multi, state=RUNNING, startTime=1491600212924, completionTime=-1,
rpcMethod=Multi appears to have been leaked
> 14:24:29,938 WARN  [1767285516@qtp-650232729-6] monitoring.TaskMonitor: Status phoenix-update-statistics-0:
status=Servicing call from null:0: ExecService, state=RUNNING, startTime=1491600212926, completionTime=-1,
rpcMethod=ExecService appears to have been leaked
> 14:24:32,519 INFO  [ZooKeeperWatcher and Master delayed closing for connection hconnection-0x572b7d6e]
client.HConnectionManager$HConnectionImplementation$DelayedClosing: ZooKeeperWatcher and Master
delayed closing for connection hconnection-0x572b7d6e exiting
> 14:24:32,550 INFO  [ZooKeeperWatcher and Master delayed closing for connection hconnection-0x4e1d5377]
client.HConnectionManager$HConnectionImplementation$DelayedClosing: ZooKeeperWatcher and Master
delayed closing for connection hconnection-0x4e1d5377 exiting
> 14:24:54,087 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9]
zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x1b84d026 connecting to ZooKeeper
ensemble=quorum
> 14:24:54,087 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9]
zookeeper.ZooKeeper: Initiating client connection, connectString=quorum sessionTimeout=10000
watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@2f63115d
> 14:24:54,090 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9-SendThread(quorum)]
zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not
attempt to authenticate using SASL (unknown error)
> 14:24:54,090 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory:
Accepted socket connection from /127.0.0.1:42953
> 14:24:54,091 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9-SendThread(quorum)]
zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating
session
> 14:24:54,091 INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.ZooKeeperServer:
Client attempting to establish new session at /127.0.0.1:42953
> 14:24:54,121 INFO  [SyncThread:0] server.ZooKeeperServer: Established session 0x15b44dc4ed701a0
with negotiated timeout 10000 for client /127.0.0.1:42953
> 14:24:54,122 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9-SendThread(quorum)]
zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid
= 0x15b44dc4ed701a0, negotiated timeout = 10000
> 14:25:47,574 WARN  [B.DefaultRpcServer.handler=3,queue=0,port=33129] ipc.RpcServer: (responseTooSlow):
{"processingtimems":60118,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"10.9.99.58:59706","param":"region=
T,,1491598353059.a4803019c76c77f67034c8526188b7f7., for 1 actions and 1st row key=00Dxx0000001gESa04500xx000003DGPa\\x7F\\xFF\\xFE\\xB7H8\\x8F\\xCF017212684dRG
  ","starttimems":1491600287454,"queuetimems":0,"class":"HRegionServer","responsesize":1511878,"method":"Multi"}
> 14:25:47,588 WARN  [B.DefaultRpcServer.handler=3,queue=0,port=33129] ipc.RpcServer: RpcServer.respondercallId:
758 service: ClientService methodName: Multi size: 2.0 M connection: 10.9.99.58:59706: output
error
> 14:25:47,589 WARN  [B.DefaultRpcServer.handler=3,queue=0,port=33129] ipc.RpcServer: B.DefaultRpcServer.handler=3,queue=0,port=33129:
caught a ClosedChannelException, this means that the server vampire8.internal.salesforce.com/10.9.99.58:33129
was processing a request but the client went away. The error message was: null
> 14:25:54,224 WARN  [CachedHtables-pool325-t1] ipc.RpcServer: (responseTooSlow): {"processingtimems":60075,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"null:0","param":"region=
T,,1491598353059.a4803019c76c77f67034c8526188b7f7., for 1 actions and 1st row key=\\x00\\x0000Dxx0000001gESa04\\x80\\x00\\x01H\\xB7\\xC7p0500xx000003DGPa017212684dRG
  ","starttimems":1491600294147,"queuetimems":13,"class":"HRegionServer","responsesize":2254879,"method":"Multi"}
> 14:25:54,371 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9]
client.AsyncProcess: #1155, waiting for some tasks to finish. Expected max=0, tasksSent=2,
tasksDone=1, currentTasksDone=1, retries=1 hasError=false, tableName=null
> 14:25:54,371 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9]
client.AsyncProcess: Left over 1 task(s) are processed on server(s): [vampire8.internal.salesforce.com,33129,1491508942482]
> 14:25:54,371 INFO  [vampire8.internal.salesforce.com,33129,1491508942482-index-writer--pool8-t9]
client.AsyncProcess: Regions against which left over task(s) are processed: [T,,1491598353059.a4803019c76c77f67034c8526188b7f7.]
> 14:26:47,785 WARN  [B.DefaultRpcServer.handler=27,queue=0,port=33129] ipc.RpcServer:
(responseTooSlow): {"processingtimems":60054,"call":"Multi(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest)","client":"10.9.99.58:59708","param":"region=
T,,1491598353059.a4803019c76c77f67034c8526188b7f7., for 1 actions and 1st row key=00Dxx0000001gESa04500xx000003DGPa\\x7F\\xFF\\xFE\\xB7H8\\x8F\\xCF017212684dRG
  ","starttimems":1491600347729,"queuetimems":0,"class":"HRegionServer","responsesize":1511878,"method":"Multi"}
> 14:26:47,788 WARN  [B.DefaultRpcServer.handler=27,queue=0,port=33129] ipc.RpcServer:
RpcServer.respondercallId: 761 service: ClientService methodName: Multi size: 2.0 M connection:
10.9.99.58:59708: output error
> {noformat}
> {noformat}
> java.sql.SQLException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
GC overhead limit exceeded
> 	at org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:270)
> 	at org.apache.phoenix.index.IndexMetaDataCacheClient.addIndexMetadataCache(IndexMetaDataCacheClient.java:78)
> 	at org.apache.phoenix.execute.MutationState.setMetaDataOnMutations(MutationState.java:1214)
> 	at org.apache.phoenix.execute.MutationState.send(MutationState.java:1044)
> 	at org.apache.phoenix.execute.MutationState.send(MutationState.java:1463)
> 	at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1295)
> 	at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:531)
> 	at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:528)
> 	at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> 	at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:528)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
