spark-issues mailing list archives

From "Dongjoon Hyun (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-22320) ORC should support VectorUDT/MatrixUDT
Date Mon, 22 Jan 2018 02:15:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-22320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333784#comment-16333784 ]

Dongjoon Hyun commented on SPARK-22320:
---------------------------------------

For this one, Parquet saves the original schema as a metadata field, `org.apache.spark.sql.parquet.row.attributes`,
[here|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L111-L113]:
{code}
    // We want to clear this temporary metadata from saving into Parquet file.
    // This metadata is only useful for detecting optional columns when pushdowning filters.
    ParquetWriteSupport.setSchema(dataSchema, conf)
{code}
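
As a quick sanity check, the following spark-shell sketch (the `/tmp/parquet` path is illustrative) shows Parquet restoring the `vector` type on read without any user-supplied schema:
{code}
scala> import org.apache.spark.ml.linalg._

scala> val df = Seq((1, Vectors.dense(1.0, 2.0))).toDF("i", "vec")

scala> df.write.parquet("/tmp/parquet")

scala> spark.read.parquet("/tmp/parquet").printSchema
root
 |-- i: integer (nullable = true)
 |-- vec: vector (nullable = true)
{code}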

For other formats like JSON/ORC, Spark doesn't have an equivalent field such as `org.apache.spark.sql.(json/orc).row.attributes`.
By the way, since the comment quoted above is wrong, we had better fix it via [PR-20346|https://github.com/apache/spark/pull/20346].

As a workaround, if a user supplies the correct schema, both the data and the schema are read back without any problem.
{code}
scala> import org.apache.spark.ml.linalg._
scala> val data = Seq((1,Vectors.dense(1.0,2.0)), (2,Vectors.sparse(8, Array(4), Array(1.0))))
data: Seq[(Int, org.apache.spark.ml.linalg.Vector)] = List((1,[1.0,2.0]), (2,(8,[4],[1.0])))

scala> val df = data.toDF("i", "vec")
scala> df.write.json("/tmp/json")

scala> spark.read.schema(df.schema).json("/tmp/json").show()
+---+-------------+
|  i|          vec|
+---+-------------+
|  2|(8,[4],[1.0])|
|  1|    [1.0,2.0]|
+---+-------------+

scala> df.write.orc("/tmp/orc")
scala> spark.read.schema(df.schema).orc("/tmp/orc").show()
+---+-------------+
|  i|          vec|
+---+-------------+
|  1|    [1.0,2.0]|
|  2|(8,[4],[1.0])|
+---+-------------+

scala> spark.read.schema(df.schema).orc("/tmp/orc").printSchema
root
 |-- i: integer (nullable = true)
 |-- vec: vector (nullable = true)
{code}
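
If the writing `DataFrame` is no longer available at read time, its schema can be kept around via its JSON representation; a minimal sketch (how `schemaJson` is persisted, e.g. in a sidecar file, is up to the user):
{code}
scala> import org.apache.spark.sql.types.{DataType, StructType}

scala> val schemaJson = df.schema.json  // persist this string alongside the data

scala> val restored = DataType.fromJson(schemaJson).asInstanceOf[StructType]

scala> spark.read.schema(restored).orc("/tmp/orc").printSchema
root
 |-- i: integer (nullable = true)
 |-- vec: vector (nullable = true)
{code}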

> ORC should support VectorUDT/MatrixUDT
> --------------------------------------
>
>                 Key: SPARK-22320
>                 URL: https://issues.apache.org/jira/browse/SPARK-22320
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2, 2.1.2, 2.2.0
>            Reporter: zhengruifeng
>            Priority: Major
>
> I save a dataframe containing vectors in ORC format; when I read it back, the schema is changed.
> {code}
> scala> import org.apache.spark.ml.linalg._
> import org.apache.spark.ml.linalg._
> scala> val data = Seq((1,Vectors.dense(1.0,2.0)), (2,Vectors.sparse(8, Array(4), Array(1.0))))
> data: Seq[(Int, org.apache.spark.ml.linalg.Vector)] = List((1,[1.0,2.0]), (2,(8,[4],[1.0])))
> scala> val df = data.toDF("i", "vec")
> df: org.apache.spark.sql.DataFrame = [i: int, vec: vector]
> scala> df.schema
> res0: org.apache.spark.sql.types.StructType = StructType(StructField(i,IntegerType,false), StructField(vec,org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7,true))
> scala> df.write.orc("/tmp/123")
> scala> val df2 = spark.sqlContext.read.orc("/tmp/123")
> df2: org.apache.spark.sql.DataFrame = [i: int, vec: struct<type: tinyint, size: int ... 2 more fields>]
> scala> df2.schema
> res3: org.apache.spark.sql.types.StructType = StructType(StructField(i,IntegerType,true), StructField(vec,StructType(StructField(type,ByteType,true), StructField(size,IntegerType,true), StructField(indices,ArrayType(IntegerType,true),true), StructField(values,ArrayType(DoubleType,true),true)),true))
> {code}


