hive-dev mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-7387) Guava version conflict between hadoop and spark [Spark-Branch]
Date Wed, 16 Jul 2014 22:01:07 GMT

    [ https://issues.apache.org/jira/browse/HIVE-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14064187#comment-14064187 ]

Sean Owen commented on HIVE-7387:
---------------------------------

Hi Xuefu, I was wrong about Spark not using Guava 12+. It does now; I posted an update on
the Spark JIRA. That makes it somewhat harder to downgrade, although not much. I would not
characterize the issue as not being taken seriously. There are legitimate questions here, such
as why Hadoop can't move off Guava 11, which is now about 2.5 years old. It was very helpful
to link the Spark JIRA, which has the details, to this one.
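The error itself is a binary-compatibility problem: HashFunction.hashInt(int) was only added
in Guava 12, so Spark classes compiled against guava-14.0.1 fail at runtime when
guava-11.0.2 wins on the classpath. A minimal sketch of the mismatch (the class name
GuavaHashIntCheck is hypothetical; Hashing.murmur3_32() exists in both versions):

{code}
import com.google.common.hash.HashCode;
import com.google.common.hash.Hashing;

// Compiles against Guava 14.0.1, but throws the NoSuchMethodError quoted
// below when guava-11.0.2.jar is resolved first, because
// HashFunction.hashInt(int) only exists from Guava 12 onward.
public class GuavaHashIntCheck {
    public static void main(String[] args) {
        HashCode code = Hashing.murmur3_32().hashInt(42);
        System.out.println(code);
    }
}
{code}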

> Guava version conflict between hadoop and spark [Spark-Branch]
> --------------------------------------------------------------
>
>                 Key: HIVE-7387
>                 URL: https://issues.apache.org/jira/browse/HIVE-7387
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Chengxiang Li
>            Assignee: Chengxiang Li
>
> hadoop-hdfs and hadoop-common depend on guava-11.0.2.jar, while Spark depends on
guava-14.0.1.jar. guava-11.0.2 has API conflicts with guava-14.0.1, and because the Hive CLI
currently loads both dependencies onto the classpath, queries fail on either the Spark engine
or the MR engine.
> {code}
> java.lang.NoSuchMethodError: com.google.common.hash.HashFunction.hashInt(I)Lcom/google/common/hash/HashCode;
> 	at org.apache.spark.util.collection.OpenHashSet.org$apache$spark$util$collection$OpenHashSet$$hashcode(OpenHashSet.scala:261)
> 	at org.apache.spark.util.collection.OpenHashSet$mcI$sp.getPos$mcI$sp(OpenHashSet.scala:165)
> 	at org.apache.spark.util.collection.OpenHashSet$mcI$sp.contains$mcI$sp(OpenHashSet.scala:102)
> 	at org.apache.spark.util.SizeEstimator$$anonfun$visitArray$2.apply$mcVI$sp(SizeEstimator.scala:214)
> 	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> 	at org.apache.spark.util.SizeEstimator$.visitArray(SizeEstimator.scala:210)
> 	at org.apache.spark.util.SizeEstimator$.visitSingleObject(SizeEstimator.scala:169)
> 	at org.apache.spark.util.SizeEstimator$.org$apache$spark$util$SizeEstimator$$estimate(SizeEstimator.scala:161)
> 	at org.apache.spark.util.SizeEstimator$.estimate(SizeEstimator.scala:155)
> 	at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:75)
> 	at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
> 	at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:661)
> 	at org.apache.spark.storage.BlockManager.put(BlockManager.scala:546)
> 	at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:812)
> 	at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
> 	at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
> 	at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
> 	at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
> 	at org.apache.spark.SparkContext.broadcast(SparkContext.scala:776)
> 	at org.apache.spark.rdd.HadoopRDD.<init>(HadoopRDD.scala:112)
> 	at org.apache.spark.SparkContext.hadoopRDD(SparkContext.scala:527)
> 	at org.apache.spark.api.java.JavaSparkContext.hadoopRDD(JavaSparkContext.scala:307)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkClient.createRDD(SparkClient.java:204)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkClient.execute(SparkClient.java:167)
> 	at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:32)
> 	at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:159)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
> 	at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:72)
> {code}
> NO PRECOMMIT TESTS. This is for spark branch only.
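
For anyone reproducing this, a hedged diagnostic sketch (the class name WhichGuava is
hypothetical) that reports which Guava jar the JVM actually resolved and whether the
Guava 12+ hashInt method is present:

{code}
import com.google.common.hash.HashFunction;

public class WhichGuava {
    public static void main(String[] args) {
        // Print the jar HashFunction was loaded from, i.e. which Guava won.
        System.out.println(
            HashFunction.class.getProtectionDomain().getCodeSource().getLocation());
        try {
            // hashInt(int) was added in Guava 12; its absence means Guava 11.
            HashFunction.class.getMethod("hashInt", int.class);
            System.out.println("hashInt(int) present: Guava 12+ on the classpath");
        } catch (NoSuchMethodException e) {
            System.out.println("hashInt(int) missing: Guava 11 on the classpath");
        }
    }
}
{code}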



--
This message was sent by Atlassian JIRA
(v6.2#6252)
