spark-dev mailing list archives

From Cheng Lian <lian.cs....@gmail.com>
Subject Re: SparkSQL: map type MatchError when inserting into Hive table
Date Sat, 27 Sep 2014 03:08:24 GMT
Would you mind providing the DDL of this partitioned table together
with the query you tried? The stack trace suggests that the query was
trying to cast a map into something else, which is not supported in
Spark SQL. I also doubt whether Hive supports casting a complex type to
some other type.
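For reference, a minimal hypothetical table/query pair of the shape Du describes follows; the actual DDL was not posted, so every name here is an assumption, not the real schema:

```sql
-- Hypothetical sketch only: table, column, and partition names are assumptions.
CREATE TABLE a (
  m   MAP<STRING, STRING>,
  ams ARRAY<MAP<STRING, STRING>>
) PARTITIONED BY (ds STRING);

-- An insert of this shape would exercise Cast if Spark SQL decides the
-- SELECT output schema needs coercion to the target table schema.
INSERT OVERWRITE TABLE a PARTITION (ds = '2014-09-26')
SELECT m, ams FROM src;
```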

On 9/27/14 7:48 AM, Du Li wrote:
> Hi,
>
> I was loading data into a partitioned table on Spark 1.1.0
> beeline-thriftserver. The table has complex data types such as map<string,
> string> and array<map<string,string>>. The query is like "insert overwrite
> table a partition (…) select …" and the select clause worked if run
> separately. However, when running the insert query, there was an error as
> follows.
>
> The source code of Cast.scala seems to only handle the primitive data
> types, which is perhaps why the MatchError was thrown.
>
> I just wonder if this is still work in progress, or I should do it
> differently.
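The failure mode Du describes in Cast.scala (a pattern match that covers only some data types) can be sketched in plain Scala; this is a hypothetical illustration of the mechanism, not Spark's actual Cast code:

```scala
// Sketch (NOT Spark's real Cast.scala): a non-exhaustive pattern match over
// a data-type hierarchy compiles (with a warning) but throws
// scala.MatchError at runtime when it meets a case it does not cover.
sealed trait DataType
case object StringType extends DataType
case object IntegerType extends DataType
case class MapType(keyType: DataType, valueType: DataType) extends DataType

object CastSketch {
  // Handles only "primitive" types, like the behavior described above;
  // there is no case for MapType, so passing one raises MatchError.
  def castTag(dt: DataType): String = dt match {
    case StringType  => "string"
    case IntegerType => "int"
  }

  def main(args: Array[String]): Unit = {
    println(castTag(StringType)) // prints "string"
    try castTag(MapType(StringType, StringType))
    catch { case e: MatchError => println("caught MatchError") }
  }
}
```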
>
> Thanks,
> Du
>
>
> ----
> scala.MatchError: MapType(StringType,StringType,true) (of class org.apache.spark.sql.catalyst.types.MapType)
>          org.apache.spark.sql.catalyst.expressions.Cast.cast$lzycompute(Cast.scala:247)
>          org.apache.spark.sql.catalyst.expressions.Cast.cast(Cast.scala:247)
>          org.apache.spark.sql.catalyst.expressions.Cast.eval(Cast.scala:263)
>          org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:84)
>          org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:66)
>          org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:50)
>          scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>          scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>          org.apache.spark.sql.hive.execution.InsertIntoHiveTable.org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1(InsertIntoHiveTable.scala:149)
>          org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$1.apply(InsertIntoHiveTable.scala:158)
>          org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$1.apply(InsertIntoHiveTable.scala:158)
>          org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>          org.apache.spark.scheduler.Task.run(Task.scala:54)
>          org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
>          java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          java.lang.Thread.run(Thread.java:722)
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> For additional commands, e-mail: user-help@spark.apache.org
>



