spark-reviews mailing list archives

From: HyukjinKwon <...@git.apache.org>
Subject: [GitHub] spark pull request #21060: [SPARK-23942][PYTHON][SQL][BRANCH-2.3] Makes coll...
Date: Fri, 13 Apr 2018 03:39:39 GMT
GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/21060

    [SPARK-23942][PYTHON][SQL][BRANCH-2.3] Makes collect in PySpark as action for a query executor listener

    ## What changes were proposed in this pull request?
    
    This PR proposes to report PySpark's `collect` to the query execution listener as an action.
    
    Currently, `collect` (and `collect` with Arrow) in PySpark is not recognised as an action by `QueryExecutionListener`. For example, suppose we have a custom listener as below:
    
    ```scala
    package org.apache.spark.sql
    
    import org.apache.spark.internal.Logging
    import org.apache.spark.sql.execution.QueryExecution
    import org.apache.spark.sql.util.QueryExecutionListener
    
    class TestQueryExecutionListener extends QueryExecutionListener with Logging {
      override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {
        logError("Look at me! I'm 'onSuccess'")
      }

      override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = { }
    }
    ```
    and set `spark.sql.queryExecutionListeners` to `org.apache.spark.sql.TestQueryExecutionListener`.
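    
    For example, one way to wire the listener up (a sketch, not part of this PR; it assumes the listener class is already on the driver classpath, e.g. via `--jars`) is to set the conf when the session is created, since `spark.sql.queryExecutionListeners` is a static configuration:
    
    ```scala
    import org.apache.spark.sql.SparkSession
    
    // The static conf must be set before the SparkSession is created; the class
    // name matches the TestQueryExecutionListener example above.
    val spark = SparkSession.builder()
      .config("spark.sql.queryExecutionListeners",
        "org.apache.spark.sql.TestQueryExecutionListener")
      .getOrCreate()
    ```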
    
    Other operations on the PySpark and Scala sides seem fine:
    
    ```python
    >>> sql("SELECT * FROM range(1)").show()
    ```
    ```
    18/04/09 17:02:04 ERROR TestQueryExecutionListener: Look at me! I'm 'onSuccess'
    +---+
    | id|
    +---+
    |  0|
    +---+
    ```
    
    ```scala
    scala> sql("SELECT * FROM range(1)").collect()
    ```
    ```
    18/04/09 16:58:41 ERROR TestQueryExecutionListener: Look at me! I'm 'onSuccess'
    res1: Array[org.apache.spark.sql.Row] = Array([0])
    ```
    
    but PySpark's `collect`, and `toPandas` with Arrow enabled, do not trigger the listener:
    
    **Before**
    
    ```python
    >>> sql("SELECT * FROM range(1)").collect()
    ```
    ```
    [Row(id=0)]
    ```
    
    ```python
    >>> spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    >>> sql("SELECT * FROM range(1)").toPandas()
    ```
    ```
       id
    0   0
    ```
    
    **After**
    
    ```python
    >>> sql("SELECT * FROM range(1)").collect()
    ```
    ```
    18/04/09 16:57:58 ERROR TestQueryExecutionListener: Look at me! I'm 'onSuccess'
    [Row(id=0)]
    ```
    
    ```python
    >>> spark.conf.set("spark.sql.execution.arrow.enabled", "true")
    >>> sql("SELECT * FROM range(1)").toPandas()
    ```
    ```
    18/04/09 17:53:26 ERROR TestQueryExecutionListener: Look at me! I'm 'onSuccess'
       id
    0   0
    ```
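    
    The change is roughly of the following shape (a sketch of the idea rather than the exact diff; it assumes `Dataset`'s private `withAction` helper on branch-2.3, which runs an action and notifies `QueryExecutionListener`):
    
    ```scala
    // In Dataset.scala: route the Python-facing collect path through the same
    // action wrapper that collect()/show() already use, so that the listener's
    // onSuccess/onFailure callbacks fire.
    private[sql] def collectToPython(): Int = {
      EvaluatePython.registerPicklers()
      withAction("collectToPython", queryExecution) { plan =>
        // ... serve the collected rows to the Python side, as before ...
      }
    }
    ```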
    
    ## How was this patch tested?
    
    Manually tested as described above; a unit test was also added.
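    
    For reference, the kind of check involved (a spark-shell sketch, not the added test itself) registers a listener programmatically and asserts that the action was reported:
    
    ```scala
    import org.apache.spark.sql.execution.QueryExecution
    import org.apache.spark.sql.util.QueryExecutionListener
    
    // Record the names of the actions the listener observes.
    var seen = List.empty[String]
    spark.listenerManager.register(new QueryExecutionListener {
      override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
        seen ::= funcName
      override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = ()
    })
    
    // On branch-2.3, listeners are notified synchronously, so the assertion can
    // run right after the action.
    spark.sql("SELECT * FROM range(1)").collect()
    assert(seen.contains("collect"))
    ```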


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark PR_TOOL_PICK_PR_21007_BRANCH-2.3

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/21060.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #21060
    
----
commit 4656724d27c208d794f99691cfbf93b4bb118d93
Author: hyukjinkwon <gurwls223@...>
Date:   2018-04-13T03:28:13Z

    [SPARK-23942][PYTHON][SQL] Makes collect in PySpark as action for a query executor listener
    
    Author: hyukjinkwon <gurwls223@apache.org>
    
    Closes #21007 from HyukjinKwon/SPARK-23942.
    
    (cherry picked from commit ab7b961a4fe96ca02b8352d16b0fa80c972b67fc)
    Signed-off-by: hyukjinkwon <gurwls223@apache.org>

----

