atlas-dev mailing list archives

From "Vimal Sharma (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (ATLAS-639) Exception for lineage request
Date Mon, 18 Jul 2016 09:35:20 GMT

     [ https://issues.apache.org/jira/browse/ATLAS-639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vimal Sharma updated ATLAS-639:
-------------------------------
    Description: 
Exceptions appear in the log for a lineage request.

Steps to reproduce:
	create table j13 (col4 String);
	create table j14 (col5 String);
	insert into table j13 select * from j14;
	insert into table j14 select * from j13;

The lineage API call, i.e. http://localhost:21000/api/atlas/lineage/"guid"/inputs/graph, is non-responsive; a minimal call sketch is shown below.
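A minimal reproduction sketch of the failing call (assumptions: curl is available on the local setup, the GUID placeholder is not from this report and must be looked up separately, e.g. from the Atlas UI for table j13 or j14, and any locally enabled authentication is omitted):

{noformat}
# Sketch only: call the lineage endpoint quoted in this report.
# <guid-of-table-j13> is a placeholder; substitute the actual entity GUID.
GUID="<guid-of-table-j13>"

# --max-time makes the hang visible instead of blocking the shell indefinitely.
curl -sS --max-time 60 \
  "http://localhost:21000/api/atlas/lineage/${GUID}/inputs/graph"
{noformat}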
The following exceptions are observed in the log file:

{noformat}
2016-07-18 14:46:20,841 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf],
no brokers found when trying to rebalance. (Logging$class:83)
2016-07-18 14:46:20,848 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf],
Topic for path /brokers/topics/ATLAS_HOOK gets deleted, which should not happen at this time
(Logging$class:83)
2016-07-18 14:48:27,697 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x155fd3803b20033,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-07-18 14:48:27,715 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x155fd3803b20031,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-07-18 14:48:34,551 ERROR - [main-EventThread:] ~ Background operation retry gave up (CuratorFrameworkImpl:537)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:708)
	at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:499)
	at org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2016-07-18 14:48:34,561 ERROR - [main-EventThread:] ~ Background operation retry gave up (CuratorFrameworkImpl:537)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:708)
	at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:499)
	at org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
	at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2016-07-18 14:48:34,566 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Exception
causing close of session 0x155fd3803b20032 due to java.nio.channels.AsynchronousCloseException
(NIOServerCnxn:362)
2016-07-18 14:50:50,513 WARN  - [main-EventThread:] ~ Session expired event received (ConnectionState:288)
2016-07-18 14:52:52,577 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x155fd3803b20034,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-07-18 14:52:52,589 WARN  - [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf_watcher_executor:]
~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when trying to rebalance.
(Logging$class:83)
2016-07-18 14:52:52,608 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf],
no brokers found when trying to rebalance. (Logging$class:83)
2016-07-18 14:52:52,612 WARN  - [ZkClient-EventThread-99-localhost:9026:] ~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf],
Topic for path /brokers/topics/ATLAS_HOOK gets deleted, which should not happen at this time
(Logging$class:83)
2016-07-18 14:52:52,635 WARN  - [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf_watcher_executor:]
~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when trying to rebalance.
(Logging$class:83)
2016-07-18 14:52:52,657 WARN  - [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf_watcher_executor:]
~ [atlas_hw-f45c89ac3a11.local-1468832155097-fb78fddf], no brokers found when trying to rebalance.
(Logging$class:83)
2016-07-18 14:54:59,478 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x155fd3803b20035,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-07-18 14:54:59,487 WARN  - [Curator-Framework-0:] ~ Connection attempt unsuccessful after
126898 (greater than max timeout of 20000). Resetting connection and trying again with a new
connection. (ConnectionState:191)
2016-07-18 14:55:12,221 ERROR - [ZkClient-EventThread-65-localhost:9026:] ~ Controller 1 epoch
299 initiated state change for partition [ATLAS_ENTITIES,0] from OfflinePartition to OnlinePartition
failed (Logging$class:103)
kafka.common.NoReplicaOnlineException: No replica for partition [ATLAS_ENTITIES,0] is alive.
Live brokers are: [Set()], Assigned replicas are: [List(1)]
	at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:75)
	at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345)
	at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205)
	at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
	at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
	at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
	at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70)
	at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:335)
	at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:166)
	at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84)
	at kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply$mcZ$sp(KafkaController.scala:1175)
	at kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1173)
	at kafka.controller.KafkaController$SessionExpirationListener$$anonfun$handleNewSession$1.apply(KafkaController.scala:1173)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
	at kafka.controller.KafkaController$SessionExpirationListener.handleNewSession(KafkaController.scala:1173)
	at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:735)
	at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
{noformat}
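A quick check of the broker registration that the log complains about (a sketch only; it assumes a Kafka distribution's bin/zookeeper-shell.sh is available and uses the embedded ZooKeeper port 9026 seen in the log):

{noformat}
# Sketch: list broker registrations in the embedded ZooKeeper (localhost:9026).
# An empty /brokers/ids is consistent with the "no brokers found" and
# "Live brokers are: [Set()]" warnings above. The script path is an assumption
# about the local Kafka installation.
bin/zookeeper-shell.sh localhost:9026 ls /brokers/ids
bin/zookeeper-shell.sh localhost:9026 ls /brokers/topics
{noformat}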

  was:
Exception in log for lineage request

Steps to reproduce:
	create table j13 (col4 String);
	create table j14 (col5 String);
	insert into table j13 select * from j14;
	insert into table j14 select * from j13;

Query for the lineage of either of the two entities above (while also issuing other requests in parallel). The lineage request times out, and the error messages below are observed in the application log. These messages appear continuously. Cleaning the data directory and manually restarting Atlas fixes this (a workaround sketch follows the environment note below).


Environment: Atlas running in embedded mode on a local setup.
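A workaround sketch for this embedded/local setup (ATLAS_HOME and the data directory location are assumptions that depend on the local configuration, and wiping the data directory deletes all stored metadata):

{noformat}
# Sketch only: stop Atlas, clear the embedded data directory, restart.
cd "$ATLAS_HOME"                 # assumed install location
bin/atlas_stop.py                # stop the Atlas server
rm -rf data/                     # embedded ZooKeeper/Kafka/graph data (destructive)
bin/atlas_start.py               # restart with a fresh data directory
{noformat}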

{noformat}
2016-04-06 11:41:45,112 DEBUG - [qtp1936542270-298 - fecc3aa7-f4cf-4f6d-89a2-d973c05bc06f:]
~ graph commit (GraphTransactionInterceptor:44)
2016-04-06 11:41:45,112 DEBUG - [qtp1936542270-298 - fecc3aa7-f4cf-4f6d-89a2-d973c05bc06f:]
~ graph commit (GraphTransactionInterceptor:44)
2016-04-06 11:41:48,935 INFO  - [main-SendThread(localhost:9026):] ~ Client session timed
out, have not heard from server in 1669ms for sessionid 0x153ea2cd8da002b, closing socket
connection and attempting reconnect (ClientCnxn:1096)
2016-04-06 11:41:48,935 INFO  - [SessionTracker:] ~ Expiring session 0x153ea2cd8da002a, timeout
of 1000ms exceeded (ZooKeeperServer:347)
2016-04-06 11:41:48,935 INFO  - [main-SendThread(localhost:9026):] ~ Client session timed
out, have not heard from server in 1917ms for sessionid 0x153ea2cd8da002a, closing socket
connection and attempting reconnect (ClientCnxn:1096)
2016-04-06 11:41:48,935 INFO  - [SessionTracker:] ~ Expiring session 0x153ea2cd8da002b, timeout
of 1000ms exceeded (ZooKeeperServer:347)
2016-04-06 11:41:48,935 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x153ea2cd8da002b,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-04-06 11:41:48,935 INFO  - [ProcessThread(sid:0 cport:-1)::] ~ Processed session termination
for sessionid: 0x153ea2cd8da002a (PrepRequestProcessor:494)
2016-04-06 11:41:48,936 INFO  - [ProcessThread(sid:0 cport:-1)::] ~ Processed session termination
for sessionid: 0x153ea2cd8da002b (PrepRequestProcessor:494)
2016-04-06 11:41:48,936 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Closed
socket connection for client /127.0.0.1:51657 which had sessionid 0x153ea2cd8da002b (NIOServerCnxn:1007)
2016-04-06 11:41:48,936 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x153ea2cd8da002a,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-04-06 11:41:48,936 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Closed
socket connection for client /127.0.0.1:51652 which had sessionid 0x153ea2cd8da002a (NIOServerCnxn:1007)
2016-04-06 11:41:49,040 INFO  - [main-EventThread:] ~ zookeeper state changed (Disconnected)
(ZkClient:711)
2016-04-06 11:41:49,040 INFO  - [main-EventThread:] ~ zookeeper state changed (Disconnected)
(ZkClient:711)
2016-04-06 11:41:58,894 INFO  - [main-SendThread(localhost:9026):] ~ Opening socket connection
to server localhost/127.0.0.1:9026. Will not attempt to authenticate using SASL (unknown error)
(ClientCnxn:975)
2016-04-06 11:41:58,894 INFO  - [Curator-Framework-0-SendThread(localhost:9026):] ~ Client
session timed out, have not heard from server in 11587ms for sessionid 0x153ea2cd8da0025,
closing socket connection and attempting reconnect (ClientCnxn:1096)
2016-04-06 11:41:58,894 INFO  - [main-SendThread(localhost:9026):] ~ Socket connection established
to localhost/127.0.0.1:9026, initiating session (ClientCnxn:852)
2016-04-06 11:41:58,894 INFO  - [main-SendThread(localhost:9026):] ~ Opening socket connection
to server localhost/127.0.0.1:9026. Will not attempt to authenticate using SASL (unknown error)
(ClientCnxn:975)
2016-04-06 11:41:58,894 INFO  - [main-SendThread(localhost:9026):] ~ Socket connection established
to localhost/127.0.0.1:9026, initiating session (ClientCnxn:852)
2016-04-06 11:41:58,894 INFO  - [SessionTracker:] ~ Expiring session 0x153ea2cd8da0025, timeout
of 10000ms exceeded (ZooKeeperServer:347)
2016-04-06 11:41:58,895 WARN  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ caught
end of stream exception (NIOServerCnxn:357)
EndOfStreamException: Unable to read additional data from client sessionid 0x153ea2cd8da0025,
likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2016-04-06 11:41:58,895 INFO  - [elasticsearch[Sinister][scheduler][T#1]:] ~ [Sinister] [gc][old][256][137]
duration [8.8s], collections [1]/[8.9s], total [8.8s]/[3.7m], memory [612.4mb]->[635.5mb]/[910.5mb],
all_pools {[young] [2.7mb]->[4.1mb]/[114.5mb]}{[survivor] [69.2mb]->[9.8mb]/[113.5mb]}{[old]
[540.4mb]->[621.6mb]/[682.5mb]} (jvm:114)
2016-04-06 11:41:58,895 INFO  - [ProcessThread(sid:0 cport:-1)::] ~ Processed session termination
for sessionid: 0x153ea2cd8da0025 (PrepRequestProcessor:494)
2016-04-06 11:41:58,895 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Closed
socket connection for client /127.0.0.1:51608 which had sessionid 0x153ea2cd8da0025 (NIOServerCnxn:1007)
2016-04-06 11:41:58,895 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Accepted
socket connection from /127.0.0.1:51723 (NIOServerCnxnFactory:197)
2016-04-06 11:41:58,895 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Accepted
socket connection from /127.0.0.1:51724 (NIOServerCnxnFactory:197)
2016-04-06 11:41:58,895 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Client
attempting to renew session 0x153ea2cd8da002b at /127.0.0.1:51723 (ZooKeeperServer:861)
2016-04-06 11:41:58,896 INFO  - [NIOServerCxn.Factory:localhost/127.0.0.1:9026:] ~ Invalid
session 0x153ea2cd8da002b for client /127.0.0.1:51723, probably expired (ZooKeeperServer:610)
.
.
.
.
.
.
2016-04-06 11:54:22,116 INFO  - [ZkClient-EventThread-127-localhost:9026:] ~ [atlas_Ayub-Pathans-Macbook-Pro.local-1459922721425-26974d49],
exception during rebalance  (ZookeeperConsumerConnector:76)
org.I0Itec.zkclient.exception.ZkNoNodeException: org.apache.zookeeper.KeeperException$NoNodeException:
KeeperErrorCode = NoNode for /consumers/atlas/ids/atlas_Ayub-Pathans-Macbook-Pro.local-1459922721425-26974d49
	at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
	at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:995)
	at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1090)
	at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1085)
	at kafka.utils.ZkUtils.readData(ZkUtils.scala:518)
	at kafka.consumer.TopicCount$.constructTopicCount(TopicCount.scala:61)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:664)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1$$anonfun$apply$mcV$sp$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:636)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcV$sp(ZookeeperConsumerConnector.scala:627)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:627)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply(ZookeeperConsumerConnector.scala:627)
	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:626)
	at kafka.consumer.ZookeeperConsumerConnector$ZKSessionExpireListener.handleNewSession(ZookeeperConsumerConnector.scala:512)
	at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:734)
	at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
for /consumers/atlas/ids/atlas_Ayub-Pathans-Macbook-Pro.local-1459922721425-26974d49
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
	at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
	at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:119)
	at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1094)
	at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1090)
	at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:985)
	... 15 more
{noformat}


> Exception for lineage request
> -----------------------------
>
>                 Key: ATLAS-639
>                 URL: https://issues.apache.org/jira/browse/ATLAS-639
>             Project: Atlas
>          Issue Type: Bug
>    Affects Versions: trunk
>            Reporter: Ayub Khan
>            Assignee: Vimal Sharma
>            Priority: Critical
>             Fix For: trunk
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
