Date: Thu, 13 Nov 2014 14:38:33 +0000 (UTC)
From: "Xuefu Zhang (JIRA)"
To: hive-dev@hadoop.apache.org
Reply-To: dev@hive.apache.org
Subject: [jira] [Commented] (HIVE-8854) Guava dependency conflict between hive driver and remote spark context [Spark Branch]
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

    [ https://issues.apache.org/jira/browse/HIVE-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14209841#comment-14209841 ]

Xuefu Zhang commented on HIVE-8854:
-----------------------------------

Hi [~vanzin], any thoughts on this? Is it possible for the remote Spark context to use a compatible version of Guava? Or could we shade it, as we did for the Spark assembly? Thanks.
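Something like the following relocation is what I have in mind (a sketch only, assuming the maven-shade-plugin; the shaded package name is illustrative, not necessarily what the Spark assembly build actually uses):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- Rewrite Guava's packages (and every reference to them) inside the
               shaded jar, so the remote context's Guava 14.0.1 classes can no
               longer clash with the Guava 11.0.2 that Hadoop/Tez puts on the
               Hive driver's classpath. -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.spark-project.guava.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}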
> Guava dependency conflict between hive driver and remote spark context [Spark Branch]
> --------------------------------------------------------------------------------------
>
>                 Key: HIVE-8854
>                 URL: https://issues.apache.org/jira/browse/HIVE-8854
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Chengxiang Li
>              Labels: Spark-M3
>
> The Hive driver loads Guava 11.0.2 from Hadoop/Tez, while the remote Spark context depends on Guava 14.0.1. The failure appears to be JobMetrics deserialization on the Hive driver side, since Absent is used in Metrics. Here is the Hive driver log:
> {noformat}
> java.lang.IllegalAccessError: tried to access method com.google.common.base.Optional.<init>()V from class com.google.common.base.Absent
> at com.google.common.base.Absent.<init>(Absent.java:35)
> at com.google.common.base.Absent.<clinit>(Absent.java:33)
> at sun.misc.Unsafe.ensureClassInitialized(Native Method)
> at sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
> at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:140)
> at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1057)
> at java.lang.reflect.Field.getFieldAccessor(Field.java:1038)
> at java.lang.reflect.Field.getLong(Field.java:591)
> at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1663)
> at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
> at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:480)
> at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
> at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
> at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
> at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
> at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
> at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
> at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
> at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
> at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
> at akka.serialization.JavaSerializer$$anonfun$1.apply(Serializer.scala:136)
> at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
> at akka.serialization.JavaSerializer.fromBinary(Serializer.scala:136)
> at akka.serialization.Serialization$$anonfun$deserialize$1.apply(Serialization.scala:104)
> at scala.util.Try$.apply(Try.scala:161)
> at akka.serialization.Serialization.deserialize(Serialization.scala:98)
> at akka.remote.serialization.MessageContainerSerializer.fromBinary(MessageContainerSerializer.scala:63)
> at akka.serialization.Serialization$$anonfun$deserialize$1.apply(Serialization.scala:104)
> at scala.util.Try$.apply(Try.scala:161)
> at akka.serialization.Serialization.deserialize(Serialization.scala:98)
> at akka.remote.MessageSerializer$.deserialize(MessageSerializer.scala:23)
> at akka.remote.DefaultMessageDispatcher.payload$lzycompute$1(Endpoint.scala:58)
> at akka.remote.DefaultMessageDispatcher.payload$1(Endpoint.scala:58)
> at akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:76)
> at akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:937)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
> at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:415)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
> at akka.actor.ActorCell.invoke(ActorCell.scala:487)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
> at akka.dispatch.Mailbox.run(Mailbox.scala:220)
> at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
> at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {noformat}
>
> and the remote Spark context log:
>
> {noformat}
> 2014-11-13 17:16:28,481 INFO [task-result-getter-1]: scheduler.TaskSetManager (Logging.scala:logInfo(59)) - Finished task 0.0 in stage 1.0 (TID 1) in 439 ms on node14-4 (1/1)
> 2014-11-13 17:16:28,482 INFO [sparkDriver-akka.actor.default-dispatcher-8]: scheduler.DAGScheduler (Logging.scala:logInfo(59)) - Stage 1 (foreachAsync at RemoteHiveSparkClient.java:121) finished in 0.452 s
> 2014-11-13 17:16:28,482 INFO [task-result-getter-1]: scheduler.TaskSchedulerImpl (Logging.scala:logInfo(59)) - Removed TaskSet 1.0, whose tasks have all completed, from pool
> 2014-11-13 17:16:28,486 INFO [08592e9f-19a2-413d-bc48-c871259c4d2e-akka.actor.default-dispatcher-4]: remote.RemoteActorRefProvider$RemoteDeadLetterActorRef (Slf4jLogger.scala:apply$mcV$sp(74)) - Message [org.apache.hive.spark.client.Protocol$JobMetrics] from Actor[akka://08592e9f-19a2-413d-bc48-c871259c4d2e/user/RemoteDriver#-893697064] to Actor[akka://08592e9f-19a2-413d-bc48-c871259c4d2e/deadLetters] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
> 2014-11-13 17:16:28,494 INFO [08592e9f-19a2-413d-bc48-c871259c4d2e-akka.actor.default-dispatcher-4]: remote.RemoteActorRefProvider$RemoteDeadLetterActorRef (Slf4jLogger.scala:apply$mcV$sp(74)) - Message [org.apache.hive.spark.client.Protocol$JobResult] from Actor[akka://08592e9f-19a2-413d-bc48-c871259c4d2e/user/RemoteDriver#-893697064] to Actor[akka://08592e9f-19a2-413d-bc48-c871259c4d2e/deadLetters] was not delivered. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
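A quick way to confirm the mixed-jar diagnosis in the report above is to check which jar each Guava class actually resolves from on the Hive driver's classpath. The sketch below is illustrative only (not code from the issue) and uses plain JDK reflection:

{code:java}
public class GuavaClasspathCheck {
    // Resolve a class WITHOUT initializing it (initializing Absent is exactly
    // what triggers the IllegalAccessError above) and print the jar it came from.
    private static void whereFrom(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className, /* initialize = */ false,
                GuavaClasspathCheck.class.getClassLoader());
        System.out.println(className + " -> "
                + c.getProtectionDomain().getCodeSource().getLocation());
    }

    public static void main(String[] args) throws Exception {
        // If these print different jars (e.g. guava-11.0.2.jar for Optional but
        // guava-14.0.1.jar for Absent), the error is expected: Guava 11 has no
        // top-level Absent class, so Absent is pulled from the Guava 14 jar, and
        // its super() call cannot access Guava 11's Optional constructor.
        whereFrom("com.google.common.base.Optional");
        whereFrom("com.google.common.base.Absent");
    }
}
{code}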