kylin-user mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From ShaoFeng Shi <shaofeng...@apache.org>
Subject Re: Build sample cube with Spark on Kylin 2.0
Date Mon, 06 Mar 2017 04:40:45 GMT
Glad to hear this; I'm adding a doc on it, feedbacks from users like you
are welcomed!

2017-03-06 10:27 GMT+08:00 Hoang Le Trung <hoangletrung@orenj.com>:

> Hi ShaoFeng,
>
>
>
> It working now when I copy new hbase-site.xml to Hadoop conf folder
>
>
>
> Thanks!
>
> Best  Regards,
>
> *From:* ShaoFeng Shi [mailto:shaofengshi@apache.org]
> *Sent:* Monday, March 06, 2017 9:16 AM
> *To:* user
> *Subject:* Re: Build sample cube with Spark on Kylin 2.0
>
>
>
> Hi Hoang,
>
>
>
> Is this a new exception after you putting hbase-site.xml to Hadoop conf
> folder? What's your cluster release version?
>
>
>
> 2017-03-06 9:44 GMT+08:00 Hoang Le Trung <hoangletrung@orenj.com>:
>
> I just remove Warn about zookeeper when I install zookeeper server on this
> node,
>
> But I have face other issue.
>
> The log shows:
>
>
>
> 17/03/05 20:37:30 ERROR persistence.ResourceStore: Create new store
> instance failed
>
> java.lang.reflect.InvocationTargetException
>
>                     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                     at sun.reflect.NativeConstructorAccessorImpl.
> newInstance(NativeConstructorAccessorImpl.java:62)
>
>                     at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                     at java.lang.reflect.Constructor.newInstance
> (Constructor.java:423)
>
>                     at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                     at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                     at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                     at org.apache.kylin.cube.CubeManager.
> loadAllCubeInstance(CubeManager.java:731)
>
>                     at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                     at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                     at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                     at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                     at org.apache.kylin.common.util.SparkEntry.main
> (SparkEntry.java:44)
>
>                     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                     at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                     at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                     at java.lang.reflect.Method.invoke(Method.java:498)
>
>                     at org.apache.spark.deploy.
> SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(
> SparkSubmit.scala:731)
>
>                     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                     at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                     at org.apache.spark.deploy.SparkSubmit
> $.main(SparkSubmit.scala:121)
>
>                     at org.apache.spark.deploy.SparkSubmit.main(
> SparkSubmit.scala)
>
> Caused by: java.lang.IllegalArgumentException: File not exist by '
> kylin_metadata@hbase': /usr/apache-kylin/kylin_metadata@hbase
>
>                     at org.apache.kylin.common.
> persistence.FileResourceStore.<init>(FileResourceStore.java:49)
>
>                     ... 22 more
>
> 17/03/05 20:37:30 ERROR persistence.ResourceStore: Create new store
> instance failed
>
> java.lang.reflect.InvocationTargetException
>
>                     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                     at sun.reflect.NativeConstructorAccessorImpl.
> newInstance(NativeConstructorAccessorImpl.java:62)
>
>                     at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                     at java.lang.reflect.Constructor.newInstance
> (Constructor.java:423)
>
>                     at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                     at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                     at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                     at org.apache.kylin.cube.CubeManager.
> loadAllCubeInstance(CubeManager.java:731)
>
>                     at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                     at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                     at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                     at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                     at org.apache.kylin.common.util.SparkEntry.main
> (SparkEntry.java:44)
>
>                     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                     at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                     at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                     at java.lang.reflect.Method.invoke(Method.java:498)
>
>                     at org.apache.spark.deploy.
> SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(
> SparkSubmit.scala:731)
>
>                     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                     at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                     at org.apache.spark.deploy.SparkSubmit
> $.main(SparkSubmit.scala:121)
>
>                     at org.apache.spark.deploy.SparkSubmit.main(
> SparkSubmit.scala)
>
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>
>                     at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:208)
>
>                     at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                     at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                     at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                     at org.apache.hadoop.hbase.
> client.ClientScanner.<init>(ClientScanner.java:162)
>
>                     at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                     at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                     at org.apache.hadoop.hbase.
> MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
>
>                     at org.apache.hadoop.hbase.
> client.HBaseAdmin.tableExists(HBaseAdmin.java:405)
>
>                     at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                     at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                     at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                     at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                     ... 22 more
>
> Caused by: java.lang.NullPointerException
>
>                     at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.
> getMetaReplicaNodes(ZooKeeperWatcher.java:395)
>
>                     at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:562)
>
>                     at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                     at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                     at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                     at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                     at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                     at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                     at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                     ... 34 more
>
> Exception in thread "main" java.lang.RuntimeException: error execute
> org.apache.kylin.engine.spark.SparkCubingByLayer
>
>                     at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:42)
>
>                     at org.apache.kylin.common.util.SparkEntry.main
> (SparkEntry.java:44)
>
>                     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                     at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                     at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                     at java.lang.reflect.Method.invoke(Method.java:498)
>
>                     at org.apache.spark.deploy.
> SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(
> SparkSubmit.scala:731)
>
>                     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                     at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                     at org.apache.spark.deploy.SparkSubmit
> $.main(SparkSubmit.scala:121)
>
>                     at org.apache.spark.deploy.SparkSubmit.main(
> SparkSubmit.scala)
>
> Caused by: java.lang.IllegalArgumentException: Failed to find metadata
> store by url: kylin_metadata@hbase
>
>                     at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:99)
>
>                     at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                     at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                     at org.apache.kylin.cube.CubeManager.
> loadAllCubeInstance(CubeManager.java:731)
>
>                     at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                     at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                     at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                     at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                     ... 10 more
>
> 17/03/05 20:37:30 INFO client.ConnectionManager$HConnectionImplementation:
> Closing zookeeper sessionid=0x45a925fccfb00ce
>
>
>
>
>
> Thanks!
>
> Best  Regards,
>
> *From:* Li Yang [mailto:liyang@apache.org]
> *Sent:* Saturday, March 04, 2017 7:13 PM
> *To:* user@kylin.apache.org
> *Subject:* Re: Build sample cube with Spark on Kylin 2.0
>
>
>
> The log shows -- inside spark executor, hbase looked for zookeeper at
> "localhost" and failed to connect. I think the hbase-site.xml in the
> HADOOP_CONF_DIR (which passed to spark) contains an invalid zookeeper
> address.
>
>
>
>
>
> 17/03/02 00:29:33 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:33 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:33 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:33 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper getData
> failed after 4 attempts
>
> 17/03/02 00:29:33 WARN zookeeper.ZKUtil: hconnection-0x6c4e486e0x0,
> quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode
> /hbase/meta-region-server
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>
>
> On Thu, Mar 2, 2017 at 2:10 PM, Hoang Le Trung <hoangletrung@orenj.com>
> wrote:
>
> Hi team,
>
>
>
> I have problem when using Spark to build sample cube on Kylin 2.0
>
>
>
> Environment: Hortonwork Data Platform  2.5.3
>
>
>
> Spark: 1.6.2 bundle with HDP
>
>
>
> I have 3 node install zookeeper, other node install zookeeper client
>
>
>
> Kylin install in one of zookeeper client nodes
>
>
>
> Then
>
> I follow this docs to run Spark Cubing Engine
> http://kylin.apache.org/blog/2017/02/25/v2.0.0-beta-ready/
>
>
>
> But I stuck on this step :  *#7 Step Name: Build Cube with Spark*
>
>
>
> Below is *error logs:*
>
> OS command error exit with 1 -- export HADOOP_CONF_DIR=/etc/hadoop/conf
> && /usr/apache-kylin/spark/bin/spark-submit --class
> org.apache.kylin.common.util.SparkEntry  --conf
> spark.executor.instances=3  --conf spark.yarn.queue=default  --conf
> spark.yarn.am.extraJavaOptions=-Dhdp.version=current  --conf
> spark.history.fs.logDirectory=hdfs:///kylin/spark-history  --conf
> spark.driver.extraJavaOptions=-Dhdp.version=current  --conf
> spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
> --conf spark.master=yarn  --conf spark.executor.extraJavaOptions=-Dhdp.version=current
> --conf spark.executor.memory=1G  --conf spark.eventLog.dir=hdfs:///kylin/spark-history
> --conf spark.executor.cores=2  --conf spark.submit.deployMode=cluster
> --files /etc/hbase/2.5.3.0-37/0/hbase-site.xml --jars
> /usr/apache-kylin/spark/lib/spark-assembly-1.6.3-hadoop2.
> 6.0.jar,/usr/hdp/2.5.3.0-37/hbase/lib/htrace-core-3.1.0-
> incubating.jar,/usr/hdp/2.5.3.0-37/hbase/lib/hbase-client-1.
> 1.2.2.5.3.0-37.jar,/usr/hdp/2.5.3.0-37/hbase/lib/hbase-
> common-1.1.2.2.5.3.0-37.jar,/usr/hdp/2.5.3.0-37/hbase/lib/
> hbase-protocol-1.1.2.2.5.3.0-37.jar,/usr/hdp/2.5.3.0-37/
> hbase/lib/metrics-core-2.2.0.jar,/usr/hdp/2.5.3.0-37/hbase/lib/guava-12.0.1.jar,
> /usr/apache-kylin/lib/kylin-job-2.0.0-SNAPSHOT.jar -className
> org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable
> kylin_intermediate_kylin_sales_cube_bcf62d3f_8f0b_46bf_a9c5_6fac9862b892
> -output hdfs:///kylin/kylin_metadata/kylin-f2d1e3ec-e8a5-47a1-9290-
> fe12d2289857/kylin_sales_cube/cuboid/ -segmentId bcf62d3f-8f0b-46bf-a9c5-6fac9862b892
> -confPath /usr/apache-kylin/conf -cubename kylin_sales_cube
>
> SparkEntry args:-className org.apache.kylin.engine.spark.SparkCubingByLayer
> -hiveTable kylin_intermediate_kylin_sales_cube_bcf62d3f_8f0b_46bf_a9c5_6fac9862b892
> -output hdfs:///kylin/kylin_metadata/kylin-f2d1e3ec-e8a5-47a1-9290-
> fe12d2289857/kylin_sales_cube/cuboid/ -segmentId bcf62d3f-8f0b-46bf-a9c5-6fac9862b892
> -confPath /usr/apache-kylin/conf -cubename kylin_sales_cube
>
> Abstract Application args:-hiveTable kylin_intermediate_kylin_
> sales_cube_bcf62d3f_8f0b_46bf_a9c5_6fac9862b892 -output
> hdfs:///kylin/kylin_metadata/kylin-f2d1e3ec-e8a5-47a1-9290-
> fe12d2289857/kylin_sales_cube/cuboid/ -segmentId bcf62d3f-8f0b-46bf-a9c5-6fac9862b892
> -confPath /usr/apache-kylin/conf -cubename kylin_sales_cube
>
> 17/03/02 00:28:10 INFO spark.SparkContext: Running Spark version 1.6.3
>
> 17/03/02 00:28:11 INFO spark.SecurityManager: Changing view acls to: root
>
> 17/03/02 00:28:11 INFO spark.SecurityManager: Changing modify acls to: root
>
> 17/03/02 00:28:11 INFO spark.SecurityManager: SecurityManager:
> authentication disabled; ui acls disabled; users with view permissions:
> Set(root); users with modify permissions: Set(root)
>
> 17/03/02 00:28:14 WARN internal.ThreadLocalRandom: Failed to generate a
> seed from SecureRandom within 3 seconds. Not enough entrophy?
>
> 17/03/02 00:28:14 INFO util.Utils: Successfully started service
> 'sparkDriver' on port 42228.
>
> 17/03/02 00:28:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
>
> 17/03/02 00:28:15 INFO Remoting: Starting remoting
>
> 17/03/02 00:28:15 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://sparkDriverActorSystem@192.168.1.110:56302]
>
> 17/03/02 00:28:15 INFO util.Utils: Successfully started service
> 'sparkDriverActorSystem' on port 56302.
>
> 17/03/02 00:28:15 INFO spark.SparkEnv: Registering MapOutputTracker
>
> 17/03/02 00:28:15 INFO spark.SparkEnv: Registering BlockManagerMaster
>
> 17/03/02 00:28:15 INFO storage.DiskBlockManager: Created local directory
> at /tmp/blockmgr-f8bf5a50-ba6d-43ee-8050-cacc02fb558a
>
> 17/03/02 00:28:15 INFO storage.MemoryStore: MemoryStore started with
> capacity 511.1 MB
>
> 17/03/02 00:28:15 INFO spark.SparkEnv: Registering OutputCommitCoordinator
>
> 17/03/02 00:28:15 INFO server.Server: jetty-8.y.z-SNAPSHOT
>
> 17/03/02 00:28:15 INFO server.AbstractConnector: Started
> SelectChannelConnector@0.0.0.0:4040
>
> 17/03/02 00:28:15 INFO util.Utils: Successfully started service 'SparkUI'
> on port 4040.
>
> 17/03/02 00:28:15 INFO ui.SparkUI: Started SparkUI at
> http://192.168.1.110:4040
>
> 17/03/02 00:28:15 INFO spark.HttpFileServer: HTTP File server directory is
> /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/httpd-
> 04d9ef4d-e90a-4bb9-a268-67ccdfbd07dd
>
> 17/03/02 00:28:15 INFO spark.HttpServer: Starting HTTP Server
>
> 17/03/02 00:28:15 INFO server.Server: jetty-8.y.z-SNAPSHOT
>
> 17/03/02 00:28:15 INFO server.AbstractConnector: Started
> SocketConnector@0.0.0.0:49210
>
> 17/03/02 00:28:15 INFO util.Utils: Successfully started service 'HTTP file
> server' on port 49210.
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/apache-kylin/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar at
> http://192.168.1.110:49210/jars/spark-assembly-1.6.3-hadoop2.6.0.jar with
> timestamp 1488432496261
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/hdp/2.5.3.0-37/hbase/lib/htrace-core-3.1.0-incubating.jar at
> http://192.168.1.110:49210/jars/htrace-core-3.1.0-incubating.jar with
> timestamp 1488432496265
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/hdp/2.5.3.0-37/hbase/lib/hbase-client-1.1.2.2.5.3.0-37.jar at
> http://192.168.1.110:49210/jars/hbase-client-1.1.2.2.5.3.0-37.jar with
> timestamp 1488432496268
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/hdp/2.5.3.0-37/hbase/lib/hbase-common-1.1.2.2.5.3.0-37.jar at
> http://192.168.1.110:49210/jars/hbase-common-1.1.2.2.5.3.0-37.jar with
> timestamp 1488432496269
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/hdp/2.5.3.0-37/hbase/lib/hbase-protocol-1.1.2.2.5.3.0-37.jar at
> http://192.168.1.110:49210/jars/hbase-protocol-1.1.2.2.5.3.0-37.jar with
> timestamp 1488432496279
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/hdp/2.5.3.0-37/hbase/lib/metrics-core-2.2.0.jar at
> http://192.168.1.110:49210/jars/metrics-core-2.2.0.jar with timestamp
> 1488432496280
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/hdp/2.5.3.0-37/hbase/lib/guava-12.0.1.jar at
> http://192.168.1.110:49210/jars/guava-12.0.1.jar with timestamp
> 1488432496283
>
> 17/03/02 00:28:16 INFO spark.SparkContext: Added JAR
> file:/usr/apache-kylin/lib/kylin-job-2.0.0-SNAPSHOT.jar at
> http://192.168.1.110:49210/jars/kylin-job-2.0.0-SNAPSHOT.jar with
> timestamp 1488432496294
>
> 17/03/02 00:28:16 INFO impl.TimelineClientImpl: Timeline service address:
> http://hdp01.example.local:8188/ws/v1/timeline/
>
> 17/03/02 00:28:16 INFO client.RMProxy: Connecting to ResourceManager at
> hdp01.example.local/192.168.1.105:8050
>
> 17/03/02 00:28:17 INFO yarn.Client: Requesting a new application from
> cluster with 5 NodeManagers
>
> 17/03/02 00:28:17 INFO yarn.Client: Verifying our application has not
> requested more than the maximum memory capability of the cluster (20480 MB
> per container)
>
> 17/03/02 00:28:17 INFO yarn.Client: Will allocate AM container, with 896
> MB memory including 384 MB overhead
>
> 17/03/02 00:28:17 INFO yarn.Client: Setting up container launch context
> for our AM
>
> 17/03/02 00:28:17 INFO yarn.Client: Setting up the launch environment for
> our AM container
>
> 17/03/02 00:28:17 INFO yarn.Client: Preparing resources for our AM
> container
>
> 17/03/02 00:28:17 INFO yarn.Client: Uploading resource
> file:/usr/apache-kylin/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar ->
> hdfs://hdp01.example.local:8020/user/root/.sparkStaging/
> application_1488353611011_0192/spark-assembly-1.6.3-hadoop2.6.0.jar
>
> 17/03/02 00:28:19 INFO yarn.Client: Uploading resource
> file:/etc/hbase/2.5.3.0-37/0/hbase-site.xml -> hdfs://hdp01.example.local:
> 8020/user/root/.sparkStaging/application_1488353611011_0192/hbase-site.xml
>
> 17/03/02 00:28:19 INFO yarn.Client: Uploading resource
> file:/tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/__spark_conf__7834996724445524283.zip
> -> hdfs://hdp01.example.local:8020/user/root/.sparkStaging/
> application_1488353611011_0192/__spark_conf__7834996724445524283.zip
>
> 17/03/02 00:28:19 INFO spark.SecurityManager: Changing view acls to: root
>
> 17/03/02 00:28:19 INFO spark.SecurityManager: Changing modify acls to: root
>
> 17/03/02 00:28:19 INFO spark.SecurityManager: SecurityManager:
> authentication disabled; ui acls disabled; users with view permissions:
> Set(root); users with modify permissions: Set(root)
>
> 17/03/02 00:28:19 INFO yarn.Client: Submitting application 192 to
> ResourceManager
>
> 17/03/02 00:28:19 INFO impl.YarnClientImpl: Submitted application
> application_1488353611011_0192
>
> 17/03/02 00:28:20 INFO yarn.Client: Application report for
> application_1488353611011_0192 (state: ACCEPTED)
>
> 17/03/02 00:28:20 INFO yarn.Client:
>
>                  client token: N/A
>
>                 diagnostics: AM container is launched, waiting for AM
> container to Register with RM
>
>                 ApplicationMaster host: N/A
>
>                 ApplicationMaster RPC port: -1
>
>                 queue: default
>
>                 start time: 1488432499799
>
>                 final status: UNDEFINED
>
>                 tracking URL: http://hdp01.example.local:
> 8088/proxy/application_1488353611011_0192/
>
>                 user: root
>
> 17/03/02 00:28:21 INFO yarn.Client: Application report for
> application_1488353611011_0192 (state: ACCEPTED)
>
> 17/03/02 00:28:22 INFO yarn.Client: Application report for
> application_1488353611011_0192 (state: ACCEPTED)
>
> 17/03/02 00:28:23 INFO yarn.Client: Application report for
> application_1488353611011_0192 (state: ACCEPTED)
>
> 17/03/02 00:28:24 INFO yarn.Client: Application report for
> application_1488353611011_0192 (state: ACCEPTED)
>
> 17/03/02 00:28:25 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
> ApplicationMaster registered as NettyRpcEndpointRef(null)
>
> 17/03/02 00:28:25 INFO cluster.YarnClientSchedulerBackend: Add WebUI
> Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,
> Map(PROXY_HOSTS -> hdp01.example.local, PROXY_URI_BASES ->
> http://hdp01.example.local:8088/proxy/application_1488353611011_0192),
> /proxy/application_1488353611011_0192
>
> 17/03/02 00:28:25 INFO ui.JettyUtils: Adding filter:
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
>
> 17/03/02 00:28:25 INFO yarn.Client: Application report for
> application_1488353611011_0192 (state: RUNNING)
>
> 17/03/02 00:28:25 INFO yarn.Client:
>
>                  client token: N/A
>
>                 diagnostics: N/A
>
>                 ApplicationMaster host: 192.168.1.108
>
>                 ApplicationMaster RPC port: 0
>
>                 queue: default
>
>                 start time: 1488432499799
>
>                 final status: UNDEFINED
>
>                 tracking URL: http://hdp01.example.local:
> 8088/proxy/application_1488353611011_0192/
>
>                 user: root
>
> 17/03/02 00:28:25 INFO cluster.YarnClientSchedulerBackend: Application
> application_1488353611011_0192 has started running.
>
> 17/03/02 00:28:25 INFO util.Utils: Successfully started service
> 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49875.
>
> 17/03/02 00:28:25 INFO netty.NettyBlockTransferService: Server created on
> 49875
>
> 17/03/02 00:28:25 INFO storage.BlockManagerMaster: Trying to register
> BlockManager
>
> 17/03/02 00:28:25 INFO storage.BlockManagerMasterEndpoint: Registering
> block manager 192.168.1.110:49875 with 511.1 MB RAM,
> BlockManagerId(driver, 192.168.1.110, 49875)
>
> 17/03/02 00:28:25 INFO storage.BlockManagerMaster: Registered BlockManager
>
> 17/03/02 00:28:31 INFO cluster.YarnClientSchedulerBackend: Registered
> executor NettyRpcEndpointRef(null) (hdp04.example.local:32910) with ID 2
>
> 17/03/02 00:28:31 INFO storage.BlockManagerMasterEndpoint: Registering
> block manager hdp04.example.local:40965 with 511.1 MB RAM,
> BlockManagerId(2, hdp04.example.local, 40965)
>
> 17/03/02 00:28:31 INFO cluster.YarnClientSchedulerBackend: Registered
> executor NettyRpcEndpointRef(null) (hdp02.example.local:60823) with ID 1
>
> 17/03/02 00:28:31 INFO storage.BlockManagerMasterEndpoint: Registering
> block manager hdp02.example.local:47512 with 511.1 MB RAM,
> BlockManagerId(1, hdp02.example.local, 47512)
>
> 17/03/02 00:28:33 INFO cluster.YarnClientSchedulerBackend: Registered
> executor NettyRpcEndpointRef(null) (hdp07.example.local:60987) with ID 3
>
> 17/03/02 00:28:33 INFO cluster.YarnClientSchedulerBackend:
> SchedulerBackend is ready for scheduling beginning after reached
> minRegisteredResourcesRatio: 0.8
>
> 17/03/02 00:28:33 INFO util.ClassUtil: Adding path /usr/apache-kylin/conf
> to class path
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin_job_conf_inmem.xml
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin_job_conf_inmem.xml
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin_job_conf_inmem.xml at
> http://192.168.1.110:49210/files/kylin_job_conf_inmem.xml with timestamp
> 1488432513764
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin-kafka-consumer.xml
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin-kafka-consumer.xml
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin-kafka-consumer.xml at
> http://192.168.1.110:49210/files/kylin-kafka-consumer.xml with timestamp
> 1488432513772
>
> 17/03/02 00:28:33 INFO storage.BlockManagerMasterEndpoint: Registering
> block manager hdp07.example.local:37831 with 511.1 MB RAM,
> BlockManagerId(3, hdp07.example.local, 37831)
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin-tools-log4j.properties
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin-tools-log4j.properties
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin-tools-log4j.properties at
> http://192.168.1.110:49210/files/kylin-tools-log4j.properties with
> timestamp 1488432513779
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin_job_conf.xml
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin_job_conf.xml
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin_job_conf.xml at http://192.168.1.110:49210/
> files/kylin_job_conf.xml with timestamp 1488432513786
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin-server-log4j.properties
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin-server-log4j.properties
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin-server-log4j.properties at
> http://192.168.1.110:49210/files/kylin-server-log4j.properties with
> timestamp 1488432513793
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin_hive_conf.xml
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin_hive_conf.xml
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin_hive_conf.xml at http://192.168.1.110:49210/
> files/kylin_hive_conf.xml with timestamp 1488432513798
>
> 17/03/02 00:28:33 INFO util.Utils: Copying /usr/apache-kylin/conf/kylin.properties
> to /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/userFiles-
> 5194bfbb-0252-4858-8267-341774d6d179/kylin.properties
>
> 17/03/02 00:28:33 INFO spark.SparkContext: Added file
> /usr/apache-kylin/conf/kylin.properties at http://192.168.1.110:49210/
> files/kylin.properties with timestamp 1488432513803
>
> 17/03/02 00:28:33 INFO common.KylinConfig: Use
> KYLIN_CONF=/usr/apache-kylin/conf
>
> 17/03/02 00:28:33 INFO common.KylinConfig: Initialized a new KylinConfig
> from getInstanceFromEnv : 260727363
>
> 17/03/02 00:28:34 INFO hive.HiveContext: Initializing execution hive,
> version 1.2.1
>
> 17/03/02 00:28:34 INFO client.ClientWrapper: Inspected Hadoop version:
> 2.6.0
>
> 17/03/02 00:28:34 INFO client.ClientWrapper: Loaded
> org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
>
> 17/03/02 00:28:35 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
>
> 17/03/02 00:28:35 INFO metastore.ObjectStore: ObjectStore, initialize
> called
>
> 17/03/02 00:28:35 INFO DataNucleus.Persistence: Property
> hive.metastore.integral.jdo.pushdown unknown - will be ignored
>
> 17/03/02 00:28:35 INFO DataNucleus.Persistence: Property
> datanucleus.cache.level2 unknown - will be ignored
>
> 17/03/02 00:28:37 INFO metastore.ObjectStore: Setting MetaStore object pin
> classes with hive.metastore.cache.pinobjtypes="Table,
> StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
>
> 17/03/02 00:28:38 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
> "embedded-only" so does not have its own datastore table.
>
> 17/03/02 00:28:38 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
> "embedded-only" so does not have its own datastore table.
>
> 17/03/02 00:28:39 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
> "embedded-only" so does not have its own datastore table.
>
> 17/03/02 00:28:39 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
> "embedded-only" so does not have its own datastore table.
>
> 17/03/02 00:28:39 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
>
> 17/03/02 00:28:39 INFO metastore.ObjectStore: Initialized ObjectStore
>
> 17/03/02 00:28:39 WARN metastore.ObjectStore: Version information not
> found in metastore. hive.metastore.schema.verification is not enabled so
> recording the schema version 1.2.0
>
> 17/03/02 00:28:39 WARN metastore.ObjectStore: Failed to get database
> default, returning NoSuchObjectException
>
> 17/03/02 00:28:39 INFO metastore.HiveMetaStore: Added admin role in
> metastore
>
> 17/03/02 00:28:39 INFO metastore.HiveMetaStore: Added public role in
> metastore
>
> 17/03/02 00:28:39 INFO metastore.HiveMetaStore: No user is added in admin
> role, since config is empty
>
> 17/03/02 00:28:39 INFO metastore.HiveMetaStore: 0: get_all_databases
>
> 17/03/02 00:28:39 INFO HiveMetaStore.audit: ugi=root
> ip=unknown-ip-addr      cmd=get_all_databases
>
> 17/03/02 00:28:39 INFO metastore.HiveMetaStore: 0: get_functions:
> db=default pat=*
>
> 17/03/02 00:28:39 INFO HiveMetaStore.audit: ugi=root
> ip=unknown-ip-addr      cmd=get_functions: db=default pat=*
>
> 17/03/02 00:28:39 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as
> "embedded-only" so does not have its own datastore table.
>
> 17/03/02 00:28:39 INFO session.SessionState: Created local directory:
> /tmp/9c1771be-4608-4810-8f30-14eefeed50c8_resources
>
> 17/03/02 00:28:39 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/root/9c1771be-4608-4810-8f30-14eefeed50c8
>
> 17/03/02 00:28:39 INFO session.SessionState: Created local directory:
> /tmp/root/9c1771be-4608-4810-8f30-14eefeed50c8
>
> 17/03/02 00:28:39 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/root/9c1771be-4608-4810-8f30-14eefeed50c8/_tmp_space.db
>
> 17/03/02 00:28:40 INFO hive.HiveContext: default warehouse location is
> /user/hive/warehouse
>
> 17/03/02 00:28:40 INFO hive.HiveContext: Initializing
> HiveMetastoreConnection version 1.2.1 using Spark classes.
>
> 17/03/02 00:28:40 INFO client.ClientWrapper: Inspected Hadoop version:
> 2.6.0
>
> 17/03/02 00:28:40 INFO client.ClientWrapper: Loaded
> org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0
>
> 17/03/02 00:28:40 INFO hive.metastore: Trying to connect to metastore with
> URI thrift://hdp02.example.local:9083
>
> 17/03/02 00:28:40 INFO hive.metastore: Connected to metastore.
>
> 17/03/02 00:28:40 INFO session.SessionState: Created local directory:
> /tmp/1c90a0e8-1819-4c96-876e-f2a5ec3d3027_resources
>
> 17/03/02 00:28:40 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/root/1c90a0e8-1819-4c96-876e-f2a5ec3d3027
>
> 17/03/02 00:28:40 INFO session.SessionState: Created local directory:
> /tmp/root/1c90a0e8-1819-4c96-876e-f2a5ec3d3027
>
> 17/03/02 00:28:40 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/root/1c90a0e8-1819-4c96-876e-f2a5ec3d3027/_tmp_space.db
>
> 17/03/02 00:28:41 INFO cube.CubeManager: Initializing CubeManager with
> config kylin_metadata@hbase
>
> 17/03/02 00:28:41 INFO persistence.ResourceStore: Using metadata url
> kylin_metadata@hbase for resource store
>
> 17/03/02 00:28:41 INFO hbase.HBaseConnection: connection is null or
> closed, creating a new one
>
> 17/03/02 00:28:41 INFO zookeeper.RecoverableZooKeeper: Process
> identifier=hconnection-0x6c4e486e connecting to ZooKeeper
> ensemble=localhost:2181
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client environment:host.name
> =hdp06.example.local
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.version=1.8.0_77
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.vendor=Oracle Corporation
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.home=/usr/jdk64/jdk1.8.0_77/jre
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.class.path=/usr/apache-kylin/spark/conf/:/
> usr/apache-kylin/spark/lib/spark-assembly-1.6.3-hadoop2.
> 6.0.jar:/usr/apache-kylin/spark/lib/datanucleus-core-3.
> 2.10.jar:/usr/apache-kylin/spark/lib/datanucleus-rdbms-3.
> 2.9.jar:/usr/apache-kylin/spark/lib/datanucleus-api-jdo-
> 3.2.6.jar:/etc/hadoop/conf/
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.library.path=::/usr/hdp/2.5.3.0-37/hadoop/
> lib/native/Linux-amd64-64:/usr/hdp/2.5.3.0-37/hadoop/lib/
> native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.io.tmpdir=/tmp
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:java.compiler=<NA>
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client environment:os.name
> =Linux
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:os.arch=amd64
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:os.version=2.6.32-642.el6.x86_64
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client environment:user.name
> =root
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:user.home=/root
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Client
> environment:user.dir=/usr/apache-kylin
>
> 17/03/02 00:28:41 INFO zookeeper.ZooKeeper: Initiating client connection,
> connectString=localhost:2181 sessionTimeout=90000
> watcher=org.apache.hadoop.hbase.zookeeper.PendingWatcher@1c65ec63
>
> 17/03/02 00:28:41 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:41 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:41 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/hbaseid
>
> 17/03/02 00:28:42 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:42 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:42 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/hbaseid
>
> 17/03/02 00:28:43 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:43 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:44 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:44 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:44 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/hbaseid
>
> 17/03/02 00:28:45 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:45 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:47 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:47 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:48 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:48 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:49 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:49 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:49 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/hbaseid
>
> 17/03/02 00:28:50 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:50 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:51 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:51 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:52 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:52 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:53 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:53 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:54 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:54 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:55 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:55 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:56 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:56 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:58 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:58 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:58 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/hbaseid
>
> 17/03/02 00:28:58 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists
> failed after 4 attempts
>
> 17/03/02 00:28:58 WARN zookeeper.ZKUtil: hconnection-0x6c4e486e0x0,
> quorum=localhost:2181, baseZNode=/hbase Unable to set watcher on znode
> (/hbase/hbaseid)
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.
> java:1045)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(
> ZKUtil.java:418)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKClusterId.
> readClusterIdZNode(ZKClusterId.java:65)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getClusterId(ZooKeeperRegistry.java:105)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.retrieveClusterId(ConnectionManager.java:886)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.<init>(ConnectionManager.java:642)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:238)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:218)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:119)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.get(
> HBaseConnection.java:226)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> getConnection(HBaseResourceStore.java:73)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:28:58 ERROR zookeeper.ZooKeeperWatcher:
> hconnection-0x6c4e486e0x0, quorum=localhost:2181, baseZNode=/hbase Received
> unexpected KeeperException, re-throwing exception
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.
> java:1045)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(
> ZKUtil.java:418)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKClusterId.
> readClusterIdZNode(ZKClusterId.java:65)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getClusterId(ZooKeeperRegistry.java:105)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.retrieveClusterId(ConnectionManager.java:886)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.<init>(ConnectionManager.java:642)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:238)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:218)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:119)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.get(
> HBaseConnection.java:226)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> getConnection(HBaseResourceStore.java:73)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:28:58 WARN client.ZooKeeperRegistry: Can't retrieve clusterId
> from Zookeeper
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.
> java:1045)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(
> ZKUtil.java:418)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKClusterId.
> readClusterIdZNode(ZKClusterId.java:65)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getClusterId(ZooKeeperRegistry.java:105)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.retrieveClusterId(ConnectionManager.java:886)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.<init>(ConnectionManager.java:642)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:238)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:218)
>
>                 at org.apache.hadoop.hbase.client.ConnectionFactory.
> createConnection(ConnectionFactory.java:119)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.get(
> HBaseConnection.java:226)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> getConnection(HBaseResourceStore.java:73)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:28:59 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:28:59 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:28:59 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase
>
> 17/03/02 00:29:00 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:00 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:00 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase
>
> 17/03/02 00:29:01 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:01 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:02 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:02 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:02 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase
>
> 17/03/02 00:29:03 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:03 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:04 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:04 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:05 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:05 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:06 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:06 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:06 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase
>
> 17/03/02 00:29:07 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:07 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:09 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:09 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:10 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:10 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:11 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:11 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:12 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:12 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:13 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:13 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:14 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:14 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:15 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:15 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:15 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase
>
> 17/03/02 00:29:15 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper
> getChildren failed after 4 attempts
>
> 17/03/02 00:29:15 WARN zookeeper.MetaTableLocator: Got ZK exception
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase
>
> 17/03/02 00:29:16 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:16 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:16 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:17 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:17 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:17 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:18 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:18 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:20 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:20 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:20 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:21 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:21 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:22 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:22 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:23 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:23 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:24 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:24 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:24 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:25 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:25 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:26 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:26 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:27 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:27 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:28 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:28 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:29 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:29 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:31 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:31 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:32 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:32 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:33 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:33 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:33 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:33 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper getData
> failed after 4 attempts
>
> 17/03/02 00:29:33 WARN zookeeper.ZKUtil: hconnection-0x6c4e486e0x0,
> quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode
> /hbase/meta-region-server
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:29:33 ERROR zookeeper.ZooKeeperWatcher:
> hconnection-0x6c4e486e0x0, quorum=localhost:2181, baseZNode=/hbase Received
> unexpected KeeperException, re-throwing exception
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:29:34 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:34 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:34 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:35 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:35 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:35 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:36 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:36 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:37 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:37 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:37 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:38 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:38 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:39 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:39 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:41 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:41 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:42 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:42 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:42 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:43 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:43 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:44 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:44 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:45 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:45 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:46 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:46 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:47 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:47 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:48 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:48 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:49 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:49 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:50 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:50 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:51 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:51 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper getData
> failed after 4 attempts
>
> 17/03/02 00:29:51 WARN zookeeper.ZKUtil: hconnection-0x6c4e486e0x0,
> quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode
> /hbase/meta-region-server
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:29:51 ERROR zookeeper.ZooKeeperWatcher:
> hconnection-0x6c4e486e0x0, quorum=localhost:2181, baseZNode=/hbase Received
> unexpected KeeperException, re-throwing exception
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:29:52 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:52 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:52 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:53 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:53 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:53 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:54 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:54 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:55 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:55 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:55 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:29:56 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:56 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:57 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:57 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:58 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:58 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:59 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:29:59 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:29:59 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:00 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:00 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:01 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:01 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:03 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:03 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:04 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:04 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:05 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:05 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:06 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:06 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:07 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:07 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:08 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:08 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:08 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:08 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper getData
> failed after 4 attempts
>
> 17/03/02 00:30:08 WARN zookeeper.ZKUtil: hconnection-0x6c4e486e0x0,
> quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode
> /hbase/meta-region-server
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:30:08 ERROR zookeeper.ZooKeeperWatcher:
> hconnection-0x6c4e486e0x0, quorum=localhost:2181, baseZNode=/hbase Received
> unexpected KeeperException, re-throwing exception
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:30:09 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:09 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:09 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:10 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:10 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:10 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:11 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:11 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:12 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:12 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:13 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:14 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:14 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:15 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:15 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:16 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:16 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:17 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:17 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:17 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:18 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:18 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:19 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:19 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:20 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:20 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:21 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:21 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:22 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:22 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:23 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:23 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:25 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:25 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:26 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:26 WARN zookeeper.ClientCnxn: Session 0x0 for server null,
> unexpected error, closing socket connection and attempting reconnect
>
> java.net.ConnectException: Connection refused
>
>                 at sun.nio.ch.SocketChannelImpl.checkConnect(Native
> Method)
>
>                 at sun.nio.ch.SocketChannelImpl.finishConnect(
> SocketChannelImpl.java:717)
>
>                 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(
> ClientCnxnSocketNIO.java:361)
>
>                 at org.apache.zookeeper.ClientCnxn$SendThread.run(
> ClientCnxn.java:1081)
>
> 17/03/02 00:30:26 WARN zookeeper.RecoverableZooKeeper: Possibly transient
> ZooKeeper, quorum=localhost:2181, exception=org.apache.
> zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode =
> ConnectionLoss for /hbase/meta-region-server
>
> 17/03/02 00:30:26 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper getData
> failed after 4 attempts
>
> 17/03/02 00:30:26 WARN zookeeper.ZKUtil: hconnection-0x6c4e486e0x0,
> quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode
> /hbase/meta-region-server
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:30:26 ERROR zookeeper.ZooKeeperWatcher:
> hconnection-0x6c4e486e0x0, quorum=localhost:2181, baseZNode=/hbase Received
> unexpected KeeperException, re-throwing exception
>
> org.apache.zookeeper.KeeperException$ConnectionLossException:
> KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:99)
>
>                 at org.apache.zookeeper.KeeperException.create(
> KeeperException.java:51)
>
>                 at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.
> java:1155)
>
>                 at org.apache.hadoop.hbase.zookeeper.
> RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
>
>                 at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(
> ZKUtil.java:622)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionState(MetaTableLocator.java:491)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> getMetaRegionLocation(MetaTableLocator.java:172)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:608)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:589)
>
>                 at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.
> blockUntilAvailable(MetaTableLocator.java:568)
>
>                 at org.apache.hadoop.hbase.client.ZooKeeperRegistry.
> getMetaRegionLocation(ZooKeeperRegistry.java:61)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>
>                 at org.apache.hadoop.hbase.client.ConnectionManager$
> HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:300)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> 17/03/02 00:30:26 ERROR persistence.ResourceStore: Create new store
> instance failed
>
> java.lang.reflect.InvocationTargetException
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> Caused by: java.lang.IllegalArgumentException: File not exist by
> 'kylin_metadata@hbase': /usr/apache-kylin/kylin_metadata@hbase
>
>                 at org.apache.kylin.common.persistence.FileResourceStore.
> <init>(FileResourceStore.java:49)
>
>                 ... 22 more
>
> 17/03/02 00:30:26 ERROR persistence.ResourceStore: Create new store
> instance failed
>
> java.lang.reflect.InvocationTargetException
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>
>                 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingConstructorAccessorI
> mpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>
>                 at java.lang.reflect.Constructor.
> newInstance(Constructor.java:423)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:91)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> Can't get the locations
>
>                 at org.apache.hadoop.hbase.client.
> RpcRetryingCallerWithReadReplicas.getRegionLocations(
> RpcRetryingCallerWithReadReplicas.java:312)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>
>                 at org.apache.hadoop.hbase.client.
> ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>
>                 at org.apache.hadoop.hbase.client.RpcRetryingCaller.
> callWithoutRetries(RpcRetryingCaller.java:200)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.call(
> ClientScanner.java:327)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> nextScanner(ClientScanner.java:302)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.
> initializeScannerInConstruction(ClientScanner.java:167)
>
>                 at org.apache.hadoop.hbase.client.ClientScanner.<init>(
> ClientScanner.java:162)
>
>                 at org.apache.hadoop.hbase.client.HTable.getScanner(
> HTable.java:794)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(
> MetaTableAccessor.java:602)
>
>                 at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(
> MetaTableAccessor.java:366)
>
>                 at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(
> HBaseAdmin.java:405)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> tableExists(HBaseConnection.java:248)
>
>                 at org.apache.kylin.storage.hbase.HBaseConnection.
> createHTableIfNeeded(HBaseConnection.java:270)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.
> createHTableIfNeeded(HBaseResourceStore.java:90)
>
>                 at org.apache.kylin.storage.hbase.HBaseResourceStore.<
> init>(HBaseResourceStore.java:86)
>
>                 ... 22 more
>
> Exception in thread "main" java.lang.RuntimeException: error execute
> org.apache.kylin.engine.spark.SparkCubingByLayer
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:42)
>
>                 at org.apache.kylin.common.util.
> SparkEntry.main(SparkEntry.java:44)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
>
>                 at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
>
>                 at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
>
>                 at java.lang.reflect.Method.invoke(Method.java:498)
>
>                 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$
> deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>
>                 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(
> SparkSubmit.scala:181)
>
>                 at org.apache.spark.deploy.SparkSubmit$.submit(
> SparkSubmit.scala:206)
>
>                 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.
> scala:121)
>
>                 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.
> scala)
>
> Caused by: java.lang.IllegalArgumentException: Failed to find metadata
> store by url: kylin_metadata@hbase
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> createResourceStore(ResourceStore.java:99)
>
>                 at org.apache.kylin.common.persistence.ResourceStore.
> getStore(ResourceStore.java:110)
>
>                 at org.apache.kylin.cube.CubeManager.getStore(
> CubeManager.java:811)
>
>                 at org.apache.kylin.cube.CubeManager.loadAllCubeInstance(
> CubeManager.java:731)
>
>                 at org.apache.kylin.cube.CubeManager.<init>(
> CubeManager.java:142)
>
>                 at org.apache.kylin.cube.CubeManager.getInstance(
> CubeManager.java:106)
>
>                 at org.apache.kylin.engine.spark.
> SparkCubingByLayer.execute(SparkCubingByLayer.java:159)
>
>                 at org.apache.kylin.common.util.
> AbstractApplication.execute(AbstractApplication.java:37)
>
>                 ... 10 more
>
> 17/03/02 00:30:26 INFO client.ConnectionManager$HConnectionImplementation:
> Closing zookeeper sessionid=0x0
>
> 17/03/02 00:30:26 INFO spark.SparkContext: Invoking stop() from shutdown
> hook
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/static/sql,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/SQL/execution/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/SQL/execution,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/SQL/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/SQL,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/metrics/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/api,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/static,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/executors/threadDump,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/executors/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/executors,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/environment/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/environment,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/storage/rdd,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/storage/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/storage,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages/pool/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages/pool,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages/stage/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages/stage,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/stages,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/jobs/job/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/jobs/job,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/jobs/json,null}
>
> 17/03/02 00:30:26 INFO handler.ContextHandler: stopped
> o.s.j.s.ServletContextHandler{/jobs,null}
>
> 17/03/02 00:30:26 INFO ui.SparkUI: Stopped Spark web UI at
> http://192.168.1.110:4040
>
> 17/03/02 00:30:26 INFO cluster.YarnClientSchedulerBackend: Shutting down
> all executors
>
> 17/03/02 00:30:26 INFO cluster.YarnClientSchedulerBackend: Interrupting
> monitor thread
>
> 17/03/02 00:30:26 INFO cluster.YarnClientSchedulerBackend: Asking each
> executor to shut down
>
> 17/03/02 00:30:26 INFO cluster.YarnClientSchedulerBackend: Stopped
>
> 17/03/02 00:30:26 INFO spark.MapOutputTrackerMasterEndpoint:
> MapOutputTrackerMasterEndpoint stopped!
>
> 17/03/02 00:30:26 INFO storage.MemoryStore: MemoryStore cleared
>
> 17/03/02 00:30:26 INFO storage.BlockManager: BlockManager stopped
>
> 17/03/02 00:30:26 INFO storage.BlockManagerMaster: BlockManagerMaster
> stopped
>
> 17/03/02 00:30:26 INFO scheduler.OutputCommitCoordinator$
> OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
>
> 17/03/02 00:30:26 INFO spark.SparkContext: Successfully stopped
> SparkContext
>
> 17/03/02 00:30:26 INFO util.ShutdownHookManager: Shutdown hook called
>
> 17/03/02 00:30:26 INFO util.ShutdownHookManager: Deleting directory
> /tmp/spark-1cb7bdf6-2226-49c6-ac37-3c625f6fa7c6
>
> 17/03/02 00:30:26 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Shutting down remote daemon.
>
> 17/03/02 00:30:26 INFO util.ShutdownHookManager: Deleting directory
> /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d/httpd-
> 04d9ef4d-e90a-4bb9-a268-67ccdfbd07dd
>
> 17/03/02 00:30:26 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Remote daemon shut down; proceeding with flushing remote transports.
>
> 17/03/02 00:30:26 INFO remote.RemoteActorRefProvider$RemotingTerminator:
> Remoting shut down.
>
> 17/03/02 00:30:26 INFO util.ShutdownHookManager: Deleting directory
> /tmp/spark-4c402d5d-1179-4413-9f26-f4b4f44a613d
>
> 17/03/02 00:30:27 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181. Will not attempt to authenticate using
> SASL (unknown error)
>
> 17/03/02 00:30:27 INFO zookeeper.ZooKeeper: Session: 0x0 closed
>
> 17/03/02 00:30:27 INFO zookeeper.ClientCnxn: EventThread shut down
>
>
>
>
>
> So anyone have any solution for this.
>
>
>
> THANKS,
>
>
>
>
> ------------------------------
>
> This e-mail may contain confidential or privileged information. If you
> received this e-mail by mistake, please don't forward it to anyone else,
> please erase it from your device and let me know so I don't do it again.
>
>
>
>
> ------------------------------
>
> This e-mail may contain confidential or privileged information. If you
> received this e-mail by mistake, please don't forward it to anyone else,
> please erase it from your device and let me know so I don't do it again.
>
>
>
>
>
> --
>
> Best regards,
>
>
>
> Shaofeng Shi 史少锋
>
>
>
> ------------------------------
> This e-mail may contain confidential or privileged information. If you
> received this e-mail by mistake, please don't forward it to anyone else,
> please erase it from your device and let me know so I don't do it again.
>



-- 
Best regards,

Shaofeng Shi 史少锋

Mime
View raw message