spark-issues mailing list archives

From "Hossein Falaki (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-15516) Schema merging in driver fails for parquet when merging LongType and IntegerType
Date Thu, 23 Jun 2016 23:56:16 GMT

    [ https://issues.apache.org/jira/browse/SPARK-15516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347454#comment-15347454
] 

Hossein Falaki commented on SPARK-15516:
----------------------------------------

Here is an example:
{code}
CREATE TABLE mytable USING parquet OPTIONS (path "/logs/parquet-files", mergeSchema "true")
{code}
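For illustration, the failure mode can be sketched in a few lines of plain Python. This is a hypothetical simplification, not Spark's actual StructType.merge code: it models each partition's schema as a field-name-to-type map and, like Spark 2.0, refuses to merge same-named fields whose types differ rather than widening IntegerType to LongType.

```python
# Hypothetical sketch of strict schema merging, analogous to the
# behavior of Spark's StructType.merge in 2.0 (not Spark's real code).

def merge_schemas(left, right):
    """Merge two {field_name: type_name} schemas.

    Same-named fields must have identical types; there is no
    numeric widening (e.g. IntegerType -> LongType), so a type
    mismatch raises an error, mirroring the reported failure.
    """
    merged = dict(left)
    for name, dtype in right.items():
        if name in merged and merged[name] != dtype:
            raise ValueError(
                "Failed to merge incompatible data types "
                f"{merged[name]} and {dtype}")
        merged[name] = dtype
    return merged

# Two parquet partitions that disagree on the type of "id":
part1 = {"id": "LongType", "ts": "LongType"}
part2 = {"id": "IntegerType", "ts": "LongType"}

try:
    merge_schemas(part1, part2)
except ValueError as e:
    print(e)
```

A practical workaround under this model is to cast the narrower column to LongType before writing, so all partitions agree on the type.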

> Schema merging in driver fails for parquet when merging LongType and IntegerType
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-15516
>                 URL: https://issues.apache.org/jira/browse/SPARK-15516
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>         Environment: Databricks
>            Reporter: Hossein Falaki
>
> I tried to create a table from partitioned parquet directories that requires schema merging. I get the following error:
> {code}
> at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$24$$anonfun$apply$9.apply(ParquetRelation.scala:831)
>     at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$24$$anonfun$apply$9.apply(ParquetRelation.scala:826)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$24.apply(ParquetRelation.scala:826)
>     at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$24.apply(ParquetRelation.scala:801)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:756)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$22.apply(RDD.scala:756)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:318)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>     at org.apache.spark.scheduler.Task.run(Task.scala:85)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.SparkException: Failed to merge incompatible data types LongType and IntegerType
>     at org.apache.spark.sql.types.StructType$.merge(StructType.scala:462)
>     at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:420)
>     at org.apache.spark.sql.types.StructType$$anonfun$merge$1$$anonfun$apply$3.apply(StructType.scala:418)
>     at scala.Option.map(Option.scala:145)
>     at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:418)
>     at org.apache.spark.sql.types.StructType$$anonfun$merge$1.apply(StructType.scala:415)
>     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>     at org.apache.spark.sql.types.StructType$.merge(StructType.scala:415)
>     at org.apache.spark.sql.types.StructType.merge(StructType.scala:333)
>     at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anonfun$24$$anonfun$apply$9.apply(ParquetRelation.scala:829)
> {code}
> cc @rxin and [~mengxr]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

