Hi,

   I am trying out Spark 2.0. I downloaded the prebuilt package spark-2.0.0-preview-bin-hadoop2.7.tgz and installed it on my test cluster, which already had HDFS, YARN, and a Hive metastore service in place. The Thrift server started as expected, but when I tried to connect to it through beeline, I got exceptions on both the beeline side and the Thrift server side. With the same configuration, Spark 1.6.1 works without any exception. Can anybody help me solve this problem?
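   For reference, I started the server and connected with beeline roughly as follows (the host and port are placeholders here; 10000 is the default Thrift server port):

```shell
# Start the Spark Thrift JDBC/ODBC server (HiveThriftServer2), running on YARN.
$SPARK_HOME/sbin/start-thriftserver.sh --master yarn

# Connect with beeline; replace localhost:10000 with the actual host and port.
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000
```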

   On the beeline side, I got the following exception:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:379)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:230)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.recv_OpenSession(TCLIService.java:156)
    at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:143)
    at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:583)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:192)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:187)
    at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
    at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
    at org.apache.hive.beeline.Commands.close(Commands.java:987)
    at org.apache.hive.beeline.Commands.closeall(Commands.java:969)
    at org.apache.hive.beeline.BeeLine.close(BeeLine.java:826)

   On the Thrift server side, I got the following exception:

WARN netty.NettyRpcEndpointRef: Error sending message [message = RequestExecutors(0,0,Map())] in 1 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
        at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
        at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
        at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
        at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
        at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply$mcV$sp(YarnSchedulerBackend.scala:271)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(YarnSchedulerBackend.scala:271)
        at org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$1.apply(YarnSchedulerBackend.scala:271)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Failed to send RPC 8568677726416939006 to ludp02.lenovo.com/10.100.6.16:36017: java.nio.channels.ClosedChannelException
        at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
        at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
        at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
        at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
        at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
        at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
        at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
        at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
        at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        ... 1 more
Caused by: java.nio.channels.ClosedChannelException

 

谷磊 Jason Koo
Big Data Product
LCIG
No.6, Shang Di West Road, Haidian District, Beijing, P.R.China 

 
gulei2@lenovo.com
Ph: 56721523 
Mobile: 18101021523 

www.lenovo.com.cn