spark-reviews mailing list archives

From HyukjinKwon <...@git.apache.org>
Subject [GitHub] spark pull request #20211: [SPARK-23011][PYTHON][SQL] Prepend missing groupi...
Date Thu, 11 Jan 2018 03:58:40 GMT
Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20211#discussion_r160860320
  
    --- Diff: python/pyspark/sql/group.py ---
    @@ -233,6 +233,27 @@ def apply(self, udf):
             |  2| 1.1094003924504583|
             +---+-------------------+
     
    +        Notes on grouping column:
    --- End diff --
    
    Yup, I saw this use case as described in the JIRA, and I understand that the specific case can
be simplified; however, I am not sure it's straightforward to end users.
    
    For example, when I use `pandas_udf`, I would simply expect the returned schema to match
what is described in `returnType`. `pandas_udf` already requires some background, and
I think we should keep it as simple as possible.
    
    Guaranteeing the grouping columns might be convenient in some cases, but it can also feel
like hidden magic inside.
    
    I would prefer to let the UDF specify the grouping columns itself, to keep this more
straightforward.
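    
    For reference, a minimal sketch of what I mean, reusing the existing `normalize` example
from the `apply` docstring (assuming a `spark` session is already available): the UDF lists the
grouping column `id` in `returnType` itself, so the output schema is exactly the declared one
and nothing needs to be prepended behind the scenes.
    
        from pyspark.sql.functions import pandas_udf, PandasUDFType
    
        df = spark.createDataFrame(
            [(1, 1.0), (1, 2.0), (2, 3.0), (2, 5.0), (2, 10.0)],
            ("id", "v"))
    
        # The returnType declares "id" explicitly, so the result of apply()
        # has exactly this schema; the grouping column is carried through by
        # the UDF rather than being added implicitly.
        @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
        def normalize(pdf):
            v = pdf.v
            return pdf.assign(v=(v - v.mean()) / v.std())
    
        df.groupby("id").apply(normalize).show()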


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

