spark-issues mailing list archives

From "Apache Spark (JIRA)" <>
Subject [jira] [Commented] (SPARK-17608) Long type has incorrect serialization/deserialization
Date Fri, 14 Apr 2017 17:50:41 GMT


Apache Spark commented on SPARK-17608:

User 'wangmiao1981' has created a pull request for this issue:

> Long type has incorrect serialization/deserialization
> -----------------------------------------------------
>                 Key: SPARK-17608
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Thomas Powell
> I am hitting issues when using {{dapply}} on a data frame that contains a {{bigint}} in
> its schema. When this is converted to a SparkR data frame, the {{bigint}} gets converted
> to an R {{numeric}} type:
> However, the R {{numeric}} type gets converted to {{org.apache.spark.sql.types.DoubleType}}:
> The two directions therefore aren't compatible. If I use the same schema with {{dapply}}
> (and just an identity function), I get type collisions because the output type is a double
> but the schema expects a bigint.
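The round-trip mismatch described in the issue can be sketched in SparkR. This is a hypothetical reproduction, not code from the report: the column name {{x}} and the local session are illustrative, and running it requires a Spark installation.

```r
library(SparkR)
sparkR.session(master = "local[1]")

# Declare the column as bigint (LongType on the JVM side).
schema <- structType(structField("x", "bigint"))
df <- createDataFrame(data.frame(x = c(1, 2, 3)), schema = schema)

# When rows are deserialized into R, bigint values arrive as R 'numeric'
# (double), since base R has no 64-bit integer type.

# Applying an identity function with the same output schema then collides:
# the R numeric output is serialized back as DoubleType, but the declared
# output schema expects LongType.
out <- dapply(df, function(p) { p }, schema)
collect(out)  # fails with a type mismatch in Spark 2.0.0 per this report
```

The asymmetry is the bug: JVM {{LongType}} maps to R {{numeric}} on the way out, but R {{numeric}} maps back to {{DoubleType}}, so no schema can round-trip a bigint column through {{dapply}} unchanged.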

This message was sent by Atlassian JIRA

