spark-reviews mailing list archives

From kiszk <...@git.apache.org>
Subject [GitHub] spark pull request #13680: [SPARK-15962][SQL] Introduce implementation with ...
Date Tue, 09 Aug 2016 11:11:41 GMT
Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13680#discussion_r74038625
  
    --- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/util/UnsafeArraySuite.scala ---
    @@ -18,27 +18,131 @@
     package org.apache.spark.sql.catalyst.util
     
     import org.apache.spark.SparkFunSuite
    +import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
     import org.apache.spark.sql.catalyst.expressions.UnsafeArrayData
    +import org.apache.spark.unsafe.Platform
     
     class UnsafeArraySuite extends SparkFunSuite {
     
    -  test("from primitive int array") {
    -    val array = Array(1, 10, 100)
    -    val unsafe = UnsafeArrayData.fromPrimitiveArray(array)
    -    assert(unsafe.numElements == 3)
    -    assert(unsafe.getSizeInBytes == 4 + 4 * 3 + 4 * 3)
    -    assert(unsafe.getInt(0) == 1)
    -    assert(unsafe.getInt(1) == 10)
    -    assert(unsafe.getInt(2) == 100)
    +  val booleanArray = Array(false, true)
    +  val shortArray = Array(1.toShort, 10.toShort, 100.toShort)
    +  val intArray = Array(1, 10, 100)
    +  val longArray = Array(1.toLong, 10.toLong, 100.toLong)
    +  val floatArray = Array(1.1.toFloat, 2.2.toFloat, 3.3.toFloat)
    +  val doubleArray = Array(1.1, 2.2, 3.3)
    +  val stringArray = Array("1", "10", "100")
    --- End diff --
    
    @davies thanks. I understand what I should do. Although I specified the schema as follows, the generated code still uses 38 and 18 (the precision and scale of `DecimalType.SYSTEM_DEFAULT`).
    When I checked the code generation, [this code](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/GenerateUnsafeProjection.scala#L300) gets the data type from the serializer instead of from the schema. If I am correct, [this code](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala#L61) generates the serializer in `ExpressionEncoder[T]` based on `T`, not on a schema.
    When I replaced [`DecimalType.SYSTEM_DEFAULT`](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala#L511) with `DecimalType(4, 1)`, the generated code used 4 and 1.
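
    To see this concretely, here is a quick sketch (assuming a Spark 2.x `ExpressionEncoder`, whose `serializer` field holds the expressions that code generation reads; nothing here is from the PR itself):

    ```
    import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

    val enc = ExpressionEncoder[Array[BigDecimal]]
    // Each serializer expression carries its own dataType, derived from T via
    // ScalaReflection, so it still reports DecimalType(38,18) after copy(schema = ...).
    enc.serializer.foreach(expr => println(expr.dataType))
    ```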
    
    Would it be possible to let me know how to specify a specific data type through the schema?
    
    ```
    import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
    import org.apache.spark.sql.types.{ArrayType, DataTypes, StructType}

    val decimalArray = Array(BigDecimal("123").setScale(1, BigDecimal.RoundingMode.FLOOR))

    test("read array") {
      val unsafeDecimal = ExpressionEncoder[Array[BigDecimal]]
        .copy(schema = new StructType()
          .add("value", ArrayType(DataTypes.createDecimalType(4, 1), true), true))
        .resolveAndBind().toRow(decimalArray).getArray(0)
      decimalArray.zipWithIndex.foreach { case (e, i) =>
        // getDecimal returns a Spark Decimal; convert before comparing to BigDecimal
        assert(unsafeDecimal.getDecimal(i, e.precision, e.scale).toBigDecimal == e)
      }
    }
    ```
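
    (As an aside, here is a sketch of one workaround I can think of, assuming Spark 2.x's `RowEncoder`: it builds the serializer from an explicit `StructType` rather than from `T`, so the schema's decimal type should reach code generation. I am not sure whether it is appropriate for this test.)

    ```
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.encoders.RowEncoder
    import org.apache.spark.sql.types.{ArrayType, DataTypes, StructType}

    val decimalArray = Array(BigDecimal("123").setScale(1, BigDecimal.RoundingMode.FLOOR))
    val schema = new StructType()
      .add("value", ArrayType(DataTypes.createDecimalType(4, 1), true), true)
    // RowEncoder derives its serializer from the schema, so the generated code
    // should use precision 4 and scale 1. The external type for DecimalType is
    // java.math.BigDecimal, hence the .bigDecimal conversion below.
    val unsafeDecimal = RowEncoder(schema).resolveAndBind()
      .toRow(Row(decimalArray.map(_.bigDecimal)))
      .getArray(0)
    ```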

