Mailing-List: contact issues-help@ambari.apache.org; run by ezmlm
Reply-To: dev@ambari.apache.org
Date: Sat, 9 Jul 2016 17:44:11 +0000 (UTC)
From: "Renjith Kamath (JIRA)"
To: issues@ambari.apache.org
Subject: [jira] [Updated] (AMBARI-17639) Spark Interpreter fails with "HiveException: org.apache.thrift.transport.TTransportException"
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

     [ https://issues.apache.org/jira/browse/AMBARI-17639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Renjith Kamath updated AMBARI-17639:
------------------------------------
    Status: Patch Available  (was: Open)

> Spark Interpreter fails with "HiveException: org.apache.thrift.transport.TTransportException"
> ----------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-17639
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17639
>             Project: Ambari
>          Issue Type: Bug
>    Affects Versions: 2.4.0
>            Reporter: Yesha Vora
>            Assignee: Renjith Kamath
>             Fix For: 2.4.0
>
>
> Scenario:
> * Create a new notebook
> * Run the two paragraphs below
> {code}
> %sh
> hdfs dfs -copyFromLocal /etc/hadoop/conf/core-site.xml /tmp{code}
> {code}
> %spark
> val file = sc.textFile("/tmp/core-site.xml")
> val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
> counts.saveAsTextFile("/tmp/wordcount1"){code}
> {code:title=output from zeppelin notebook}
> org.apache.thrift.transport.TTransportException
>     at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>     at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>     at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>     at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>     at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>     at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_delegation_token(ThriftHiveMetastore.java:3715)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_delegation_token(ThriftHiveMetastore.java:3701)
>     at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDelegationToken(HiveMetaStoreClient.java:1796)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
>     at com.sun.proxy.$Proxy29.getDelegationToken(Unknown Source)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getDelegationToken(Hive.java:3150)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anonfun$obtainTokenForHiveMetastoreInner$4.apply(YarnSparkHadoopUtil.scala:251)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anonfun$obtainTokenForHiveMetastoreInner$4.apply(YarnSparkHadoopUtil.scala:249)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil$$anon$1.run(YarnSparkHadoopUtil.scala:340)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.doAsRealUser(YarnSparkHadoopUtil.scala:339)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokenForHiveMetastoreInner(YarnSparkHadoopUtil.scala:249)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokenForHiveMetastore(YarnSparkHadoopUtil.scala:204)
>     at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.obtainTokenForHiveMetastore(YarnSparkHadoopUtil.scala:151)
>     at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:348)
>     at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:733)
>     at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:143)
>     at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
>     at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
>     at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
>     at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:338)
>     at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:122)
>     at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:513)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:69)
>     at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
>     at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
>     at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>     at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
> The same spark wordcount example works fine directly using spark-shell. It fails only via Zeppelin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
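For comparison, the spark-shell check referred to in the last line of the report can be reproduced roughly as follows. This is only a sketch: the yarn-client master flag and the /tmp/wordcount2 output path are assumptions, not details taken from the ticket.

{code}
# Assumed invocation: a plain spark-shell session against YARN in client mode,
# matching how the Zeppelin Spark interpreter submits its application.
spark-shell --master yarn-client
{code}
{code}
// The same wordcount from the ticket, pasted at the spark-shell prompt.
// The output directory is changed so it does not collide with the
// /tmp/wordcount1 directory possibly created by the Zeppelin run.
val file = sc.textFile("/tmp/core-site.xml")
val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("/tmp/wordcount2")
{code}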