hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Spark 2.1.1 + Hadoop 2.7.3 + HBase 1.2.6 RetriesExhaustedException
Date Sun, 02 Jul 2017 15:39:54 GMT
Have you noticed the following?

> Caused by: java.io.IOException: com.google.protobuf.ServiceException:
> java.lang.NoClassDefFoundError: com/yammer/metrics/core/Gauge

Looks like the metrics-core jar was not on the classpath.
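
If you launch with spark-submit, one way to get it there is to pass HBase's copy of
that jar explicitly. A rough sketch (the jar name below is what an hbase-1.2.x lib
directory typically contains, and $HBASE_HOME / your-app.jar are placeholders -
adjust to your install):

  spark-submit --class SparkHbaseTest \
    --jars "$HBASE_HOME/lib/metrics-core-2.2.0.jar" \
    --driver-class-path "$HBASE_HOME/lib/metrics-core-2.2.0.jar" \
    your-app.jar

--jars also ships the jar to the executors; setting spark.driver.extraClassPath /
spark.executor.extraClassPath would work as well. `hbase mapredcp` prints the full
set of client-side HBase jars if you want to add them all at once.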

On Sun, Jul 2, 2017 at 8:21 AM, George Papadimitriou <georgepapajim@gmail.com> wrote:

> Hello,
>
> On a single machine I have installed hadoop-2.7.3, hbase-1.2.6 and spark 2.1.1.
> I'm trying to connect to HBase from Spark using newAPIHadoopRDD(), but I always
> receive this exception: "org.apache.hadoop.hbase.client.RetriesExhaustedException".
> I have added hbase/conf to HADOOP_CLASSPATH and to spark.driver.extraClassPath,
> but nothing changed.
> Additionally, the regionserver and zookeeper logs don't show any errors.
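>
> For reference, the read is the usual newAPIHadoopRDD pattern, roughly like this
> (simplified sketch; "sc" is the SparkContext and the table name is the one that
> appears in the log below):
>
>   import org.apache.hadoop.hbase.HBaseConfiguration
>   import org.apache.hadoop.hbase.client.Result
>   import org.apache.hadoop.hbase.io.ImmutableBytesWritable
>   import org.apache.hadoop.hbase.mapreduce.TableInputFormat
>
>   val hbaseConf = HBaseConfiguration.create()  // reads hbase-site.xml from the classpath
>   hbaseConf.set(TableInputFormat.INPUT_TABLE, "resource_usage")
>   val rdd = sc.newAPIHadoopRDD(hbaseConf,
>     classOf[TableInputFormat],
>     classOf[ImmutableBytesWritable],
>     classOf[Result])
>   println(rdd.count())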
>
> Here is an example trace from Spark:
>
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:os.version=4.10.0-26-generic
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:user.name=user
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:user.home=/home/user
> > 2017-07-02 13:11:30,577 INFO  [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/user/Desktop
> > 2017-07-02 13:11:30,578 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=Ubuntu-17:2181 sessionTimeout=90000 watcher=hconnection-0x6403e24c0x0, quorum=Ubuntu-17:2181, baseZNode=/hbase-unsecure
> > 2017-07-02 13:11:30,600 INFO  [main-SendThread(Ubuntu-17:2181)] zookeeper.ClientCnxn: Opening socket connection to server Ubuntu-17/127.0.1.1:2181. Will not attempt to authenticate using SASL (unknown error)
> > 2017-07-02 13:11:30,607 INFO  [main-SendThread(Ubuntu-17:2181)] zookeeper.ClientCnxn: Socket connection established to Ubuntu-17/127.0.1.1:2181, initiating session
> > 2017-07-02 13:11:30,659 INFO  [main-SendThread(Ubuntu-17:2181)] zookeeper.ClientCnxn: Session establishment complete on server Ubuntu-17/127.0.1.1:2181, sessionid = 0x15d02c1a13a0006, negotiated timeout = 90000
> > 2017-07-02 13:11:31,183 INFO  [main] util.RegionSizeCalculator: Calculating region sizes for table "resource_usage".
> > 2017-07-02 13:11:31,810 INFO  [dispatcher-event-loop-0] cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.88.13:51440) with ID 0
> > 2017-07-02 13:11:31,909 INFO  [dispatcher-event-loop-2] storage.BlockManagerMasterEndpoint: Registering block manager 192.168.88.13:40963 with 366.3 MB RAM, BlockManagerId(0, 192.168.88.13, 40963, None)
> > 2017-07-02 13:11:32,049 INFO  [dispatcher-event-loop-2] cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(null) (192.168.88.13:51444) with ID 1
> > 2017-07-02 13:11:32,131 INFO  [dispatcher-event-loop-3] storage.BlockManagerMasterEndpoint: Registering block manager 192.168.88.13:45349 with 366.3 MB RAM, BlockManagerId(1, 192.168.88.13, 45349, None)
> > 2017-07-02 13:12:09,677 INFO  [htable-pool2-t1] client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=38451 ms ago, cancelled=false, msg=row 'resource_usage,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ubuntu-17,16201,1498989833925, seqNum=0
> > 2017-07-02 13:12:19,681 INFO  [htable-pool2-t1] client.RpcRetryingCaller: Call exception, tries=11, retries=35, started=48456 ms ago, cancelled=false, msg=row 'resource_usage,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ubuntu-17,16201,1498989833925, seqNum=0
> > 2017-07-02 13:12:19,688 INFO  [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x15d02c1a13a0006
> > 2017-07-02 13:12:19,691 INFO  [main] zookeeper.ZooKeeper: Session: 0x15d02c1a13a0006 closed
> > 2017-07-02 13:12:19,691 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
> > Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
> > Sun Jul 02 13:12:19 EEST 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68467: row 'resource_usage,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ubuntu-17,16201,1498989833925, seqNum=0
> >
> >     at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
> >     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
> >     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
> >     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
> >     at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
> >     at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
> >     at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
> >     at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
> >     at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
> >     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
> >     at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
> >     at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
> >     at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:89)
> >     at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
> >     at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
> >     at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
> >     at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:239)
> >     at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:125)
> >     at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
> >     at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
> >     at scala.Option.getOrElse(Option.scala:121)
> >     at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
> >     at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
> >     at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
> >     at SparkHbaseTest$.main(SparkHbaseTest.scala:41)
> >     at SparkHbaseTest.main(SparkHbaseTest.scala)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >     at java.lang.reflect.Method.invoke(Method.java:498)
> >     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
> >     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
> >     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
> >     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
> >     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> > Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68467: row 'resource_usage,,00000000000000' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=ubuntu-17,16201,1498989833925, seqNum=0
> >     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
> >     at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >     at java.lang.Thread.run(Thread.java:748)
> > Caused by: java.io.IOException: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: com/yammer/metrics/core/Gauge
> >     at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:408)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:65)
> >     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
> >     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
> >     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
> >     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
> >     ... 4 more
> > Caused by: com.google.protobuf.ServiceException: java.lang.NoClassDefFoundError: com/yammer/metrics/core/Gauge
> >     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
> >     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
> >     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
> >     at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:400)
> >     ... 10 more
> > Caused by: java.lang.NoClassDefFoundError: com/yammer/metrics/core/Gauge
> >     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:225)
> >     ... 13 more
> > Caused by: java.lang.ClassNotFoundException: com.yammer.metrics.core.Gauge
> >     at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> >     at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >     at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
> >     at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >     ... 14 more
> > 2017-07-02 13:12:19,727 INFO  [Thread-2] spark.SparkContext: Invoking stop() from shutdown hook
> > 2017-07-02 13:12:19,742 INFO  [Thread-2] server.ServerConnector: Stopped Spark@668655d2{HTTP/1.1}{0.0.0.0:4040}
> > 2017-07-02 13:12:19,744 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@a77614d{/stages/stage/kill,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,744 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4b6166aa{/jobs/job/kill,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,744 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@b91d8c4{/api,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,744 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7807ac2c{/,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@f19c9d2{/static,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4089713{/executors/threadDump/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@62923ee6{/executors/threadDump,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7f811d00{/executors/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7cbee484{/executors,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7bb3a9fe{/environment/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@611f8234{/environment,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@532a02d9{/storage/rdd/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@644c78d4{/storage/rdd,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@74bdc168{/storage/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6bab2585{/storage,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,745 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@76a82f33{/stages/pool/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1922e6d{/stages/pool,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4a8ab068{/stages/stage/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@770d4269{/stages/stage,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@11acdc30{/stages/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@55f3c410{/stages,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@460f76a6{/jobs/job/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@11dee337{/jobs/job,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7997b197{/jobs/json,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,746 INFO  [Thread-2] handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@21c64522{/jobs,null,UNAVAILABLE,@Spark}
> > 2017-07-02 13:12:19,773 INFO  [Thread-2] ui.SparkUI: Stopped Spark web UI at http://192.168.88.13:4040
> > 2017-07-02 13:12:19,788 INFO  [Thread-2] cluster.StandaloneSchedulerBackend: Shutting down all executors
> > 2017-07-02 13:12:19,788 INFO  [dispatcher-event-loop-1] cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
> > 2017-07-02 13:12:19,825 INFO  [dispatcher-event-loop-0] spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
> > 2017-07-02 13:12:19,870 INFO  [Thread-2] memory.MemoryStore: MemoryStore cleared
> > 2017-07-02 13:12:19,871 INFO  [Thread-2] storage.BlockManager: BlockManager stopped
> > 2017-07-02 13:12:19,876 INFO  [Thread-2] storage.BlockManagerMaster: BlockManagerMaster stopped
> > 2017-07-02 13:12:19,880 INFO  [dispatcher-event-loop-1] scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
> > 2017-07-02 13:12:19,900 INFO  [Thread-2] spark.SparkContext: Successfully stopped SparkContext
> > 2017-07-02 13:12:19,900 INFO  [Thread-2] util.ShutdownHookManager: Shutdown hook called
> > 2017-07-02 13:12:19,901 INFO  [Thread-2] util.ShutdownHookManager: Deleting directory /tmp/spark-58fc2cb8-c52b-4c58-bb4b-1dde655007cc
> >
>
>  Thanks,
> George
>
