spark-reviews mailing list archives

From kiszk <...@git.apache.org>
Subject [GitHub] spark pull request #18626: [BACKPORT-2.1][SPARK-19104][SQL] Lambda variables...
Date Thu, 13 Jul 2017 16:16:34 GMT
GitHub user kiszk opened a pull request:

    https://github.com/apache/spark/pull/18626

    [BACKPORT-2.1][SPARK-19104][SQL] Lambda variables in ExternalMapToCatalyst should be global

    ## What changes were proposed in this pull request?
    
    This PR is a backport of #18418 to Spark 2.1.
    
    The issue happens in `ExternalMapToCatalyst`. For example, the following code creates an
`ExternalMapToCatalyst` to convert a Scala `Map` to the Catalyst map format.
    
    ```
    val data = Seq.tabulate(10)(i => NestedData(1, Map("key" -> InnerData("name", i + 100))))
    val ds = spark.createDataset(data)
    ```
    The `valueConverter` in `ExternalMapToCatalyst` looks like:
    
    ```
    if (isnull(lambdavariable(ExternalMapToCatalyst_value52, ExternalMapToCatalyst_value_isNull52,
        ObjectType(class org.apache.spark.sql.InnerData), true)))
      null
    else
      named_struct(
        name, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString,
          assertnotnull(lambdavariable(ExternalMapToCatalyst_value52, ExternalMapToCatalyst_value_isNull52,
            ObjectType(class org.apache.spark.sql.InnerData), true)).name, true),
        value, assertnotnull(lambdavariable(ExternalMapToCatalyst_value52, ExternalMapToCatalyst_value_isNull52,
          ObjectType(class org.apache.spark.sql.InnerData), true)).value)
    ```
    There is a `CreateNamedStruct` expression (`named_struct`) that creates a row from `InnerData.name`
and `InnerData.value`, both of which are referenced through `ExternalMapToCatalyst_value52`.
    
    Because `ExternalMapToCatalyst_value52` is a local variable, when `CreateNamedStruct` splits its
expressions into individual functions, the local variable can no longer be accessed.
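    The failure mode can be illustrated with a hypothetical sketch (this is not actual Spark codegen
output; the class and method names are invented for illustration). When generated code grows large,
Spark splits expression evaluation into separate methods; a value held in a method-local variable
would not be visible to those split-out methods, so the fix is to promote it to a global (instance)
field, which every method of the generated class can read:

    ```java
    // Hypothetical sketch of why a lambda variable must be "global" (an instance
    // field) rather than local once expression evaluation is split into methods.
    public class SplitExprSketch {
        // Global (instance) field: visible to every split-out method. A local
        // variable declared inside convert() would not be in scope in evalName()
        // or evalValue(), and the split code would fail to compile.
        private int externalMapToCatalyst_value;

        public int convert(int input) {
            externalMapToCatalyst_value = input;  // assign once per element
            // Each "split" method reads the shared field instead of a local.
            return evalName() + evalValue();
        }

        private int evalName()  { return externalMapToCatalyst_value * 10; }
        private int evalValue() { return externalMapToCatalyst_value + 1; }

        public static void main(String[] args) {
            System.out.println(new SplitExprSketch().convert(5)); // prints 56
        }
    }
    ```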
    
    ## How was this patch tested?
    
    Added a new test to `DatasetPrimitiveSuite`.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kiszk/spark branch-2.1

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/18626.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #18626
    
----
commit 5c9319727fe60e567e28870d2fa47a9cc6578199
Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Date:   2017-04-02T07:07:19Z

    backport #17472

commit a6985cffc89ad3b4c47f5ff5720e9d64ea173137
Author: Kazuaki Ishizaki <ishizaki@jp.ibm.com>
Date:   2017-07-13T16:11:04Z

    initial commit

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

