spark-issues mailing list archives

From "Yin Huai (JIRA)" <j...@apache.org>
Subject [jira] [Issue Comment Deleted] (SPARK-8020) Spark SQL in spark-defaults.conf make metadataHive get constructed too early
Date Tue, 02 Jun 2015 04:03:19 GMT

     [ https://issues.apache.org/jira/browse/SPARK-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yin Huai updated SPARK-8020:
----------------------------
    Comment: was deleted

(was: Is guava causing the problem?)

> Spark SQL in spark-defaults.conf make metadataHive get constructed too early
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-8020
>                 URL: https://issues.apache.org/jira/browse/SPARK-8020
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.0
>            Reporter: Yin Huai
>            Assignee: Yin Huai
>            Priority: Critical
>
> To correctly construct a {{metadataHive}} object, we need two settings: {{spark.sql.hive.metastore.version}}
> and {{spark.sql.hive.metastore.jars}}. If users want to use Hive 0.12's metastore, they need
> to set {{spark.sql.hive.metastore.version}} to {{0.12.0}} and set {{spark.sql.hive.metastore.jars}}
> to {{maven}} or to a classpath containing the Hive and Hadoop jars. However, any Spark SQL setting
> in {{spark-defaults.conf}} will trigger the construction of {{metadataHive}} and cause
> Spark SQL to connect to the wrong metastore (e.g. connecting to the local Derby metastore instead
> of a remote MySQL-backed Hive 0.12 metastore). Also, if {{spark.sql.hive.metastore.version 0.12.0}}
> is the first conf set in the SQL conf, we will get:
> {code}
> Exception in thread "main" java.lang.IllegalArgumentException: Builtin jars can only be used when hive execution version == hive metastore version. Execution: 0.13.1 != Metastore: 0.12.0. Specify a vaild path to the correct hive jars using $HIVE_METASTORE_JARS or change spark.sql.hive.metastore.version to 0.13.1.
> 	at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:186)
> 	at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:175)
> 	at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:358)
> 	at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:186)
> 	at org.apache.spark.sql.SQLContext$$anonfun$3.apply(SQLContext.scala:185)
> 	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> 	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> 	at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:185)
> 	at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:71)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:53)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:248)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:136)
> 	at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
> 	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
> 	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}
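
For context, here is a minimal sketch of the {{spark-defaults.conf}} entries the description refers to. The two property names are the actual Spark SQL confs mentioned above; the commented-out classpath is only an illustrative placeholder, not a path from the report:

{code}
# Sketch of spark-defaults.conf for pointing Spark SQL at a Hive 0.12 metastore (placeholder values).
spark.sql.hive.metastore.version   0.12.0
# Let Spark resolve matching Hive 0.12 jars via maven ...
spark.sql.hive.metastore.jars      maven
# ... or, alternatively, supply a classpath containing the Hive 0.12 and Hadoop jars, e.g.:
# spark.sql.hive.metastore.jars    /path/to/hive-0.12/lib/*:/path/to/hadoop/lib/*
{code}

As the stack trace shows, applying SQL confs from {{spark-defaults.conf}} during {{SQLContext}} initialization calls {{HiveContext.setConf}}, which forces {{metadataHive}} to be constructed before both settings above are in place.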



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

