spark-issues mailing list archives

From "Hyukjin Kwon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-24427) Spark 2.2 - Exception occurred while saving table in spark. Multiple sources found for parquet
Date Thu, 31 May 2018 02:08:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-24427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495982#comment-16495982 ]

Hyukjin Kwon commented on SPARK-24427:
--------------------------------------

Doesn't it sound like you have multiple versions of Spark on your classpath?
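
A minimal diagnostic sketch, assuming you can run a spark-shell (or any Scala REPL) on the same classpath as the failing job: Spark resolves short names such as "parquet" through java.util.ServiceLoader, so listing every copy of the DataSourceRegister service file shows which jars contribute a data source registration. Seeing more than one spark-sql jar there (for example a 1.x jar, which still ships the old DefaultSource, next to 2.2) would confirm the duplicate-versions theory.

    import scala.collection.JavaConverters._

    // Service file that Spark's DataSource.lookupDataSource scans via
    // ServiceLoader; each printed URL points at the jar contributing a
    // data source registration.
    val serviceFile =
      "META-INF/services/org.apache.spark.sql.sources.DataSourceRegister"
    getClass.getClassLoader
      .getResources(serviceFile)
      .asScala
      .foreach(println)

If duplicates show up, the fix is to remove the stale jar from the application's classpath; the error message's own suggestion of passing a fully qualified class name (sketched after the quoted trace below) is only a stopgap.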

>  Spark 2.2 - Exception occurred while saving table in spark. Multiple sources found for parquet
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-24427
>                 URL: https://issues.apache.org/jira/browse/SPARK-24427
>             Project: Spark
>          Issue Type: Bug
>          Components: Java API
>    Affects Versions: 2.2.0
>            Reporter: Ashok Rai
>            Priority: Major
>
> We are getting the error below while loading data into a Hive table. In our code we use
> "saveAsTable", which, per the documentation, automatically chooses the format the table
> was created with. We have now tested with the table created as Parquet as well as ORC;
> in both cases the same error occurred.
>  
> -----------------------------------------------------------------------------------------------------------------
> 2018-05-29 12:25:07,433 ERROR [main] ERROR - Exception occurred while saving table in spark.
>  org.apache.spark.sql.AnalysisException: Multiple sources found for parquet (org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat, org.apache.spark.sql.execution.datasources.parquet.DefaultSource), please specify the fully qualified class name.;
>  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:584) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.datasources.PreprocessTableCreation$$anonfun$apply$2.applyOrElse(rules.scala:111) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.datasources.PreprocessTableCreation$$anonfun$apply$2.applyOrElse(rules.scala:75) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.datasources.PreprocessTableCreation.apply(rules.scala:75) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.datasources.PreprocessTableCreation.apply(rules.scala:71) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57) ~[scala-library-2.11.8.jar:?]
>  at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66) ~[scala-library-2.11.8.jar:?]
>  at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48) ~[scala-library-2.11.8.jar:?]
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at scala.collection.immutable.List.foreach(List.scala:381) ~[scala-library-2.11.8.jar:?]
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74) ~[spark-catalyst_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.withCachedData$lzycompute(QueryExecution.scala:73) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.withCachedData(QueryExecution.scala:72) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:78) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:78) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89) ~[spark-sql_2.11-2.2.0.2.6.4.25-1.jar:2.2.0.2.6.4.25-1]
>  ------------------------------------------------
>  Regards,
>  Ashok
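
For completeness, a hedged sketch of the stopgap the AnalysisException itself suggests: bypass the ambiguous short name "parquet" by passing the fully qualified class name of the 2.x Parquet source to format(), so that lookupDataSource skips the short-name resolution entirely. The DataFrame and table names here are hypothetical; the real fix remains removing the stale Spark 1.x jar from the classpath.

    // Hypothetical write mirroring the reported saveAsTable call, but naming
    // the 2.x Parquet implementation explicitly instead of relying on the
    // (here ambiguous) "parquet" alias.
    df.write
      .format("org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat")
      .mode("overwrite")
      .saveAsTable("target_db.target_table")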


