spark-reviews mailing list archives

From HyukjinKwon <>
Subject [GitHub] spark pull request #20211: [SPARK-23011][PYTHON][SQL] Prepend missing groupi...
Date Thu, 11 Jan 2018 03:58:40 GMT
Github user HyukjinKwon commented on a diff in the pull request:
    --- Diff: python/pyspark/sql/ ---
    @@ -233,6 +233,27 @@ def apply(self, udf):
             |  2| 1.1094003924504583|
    +        Notes on grouping column:
    --- End diff ---
    Yup, I saw this use case as described in the JIRA, and I understand that the specific case can
be simplified; however, I am not sure it is straightforward for end users.
    For example, if I use `pandas_udf`, I would simply expect the returned schema to match what
is described in `returnType`. `pandas_udf` already requires some background, and I think we
should keep it as simple as possible.
    It might be convenient to guarantee the grouping columns in some cases, but this could feel
like magic happening inside.
    I would prefer to let the UDF specify the grouping columns itself, to keep this more
straightforward.
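
    To make the preferred behaviour concrete, here is a minimal sketch (not from the PR itself;
it assumes the Spark 2.3-era `GROUPED_MAP` API and a local session) in which the UDF explicitly
returns the grouping column `id`, so the output matches `returnType` exactly and nothing is
implicitly prepended:

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf, PandasUDFType

    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(1, 1.0), (1, 2.0), (2, 3.0)], ("id", "v"))

    # The declared returnType includes the grouping column "id"; the UDF is
    # responsible for carrying it through, rather than Spark prepending it.
    @pandas_udf("id long, v double", PandasUDFType.GROUPED_MAP)
    def subtract_mean(pdf):
        # pdf is a pandas DataFrame holding one group's rows, "id" included.
        return pdf.assign(v=pdf.v - pdf.v.mean())

    df.groupby("id").apply(subtract_mean).show()
    ```

    With this style, the output schema is exactly the declared
`returnType`, so a reader of the UDF can see where every output column comes from.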

