spark-commits mailing list archives

Subject spark git commit: [SPARK-25461][PYSPARK][SQL] Add document for mismatch between return type of Pandas.Series and return type of pandas udf
Date Sun, 07 Oct 2018 15:19:31 GMT
Repository: spark
Updated Branches:
  refs/heads/master fba722e31 -> 3eb842969

[SPARK-25461][PYSPARK][SQL] Add document for mismatch between return type of Pandas.Series
and return type of pandas udf

## What changes were proposed in this pull request?

For Pandas UDFs, we derive the Arrow type from the Catalyst return data type declared for the
UDF, and we use that Arrow type to serialize the data. If the declared return data type doesn't
match the actual type of the `pandas.Series` returned by the UDF, there is a risk of returning
incorrect data from the Python side.

Currently we don't have a reliable approach to check whether the data conversion is safe, so
for now we document this caveat for users. When a future PyArrow release provides a way to
check it, we should add an option to perform that check.
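To illustrate the kind of silent conversion this note warns about, here is a minimal sketch
using plain pandas (not Spark itself; the cast below stands in for the Arrow serialization
step): a UDF declared with an integral returnType whose body actually produces floats would
have its fractional parts truncated when the values are coerced to the declared type.

```python
import pandas as pd

# Suppose a pandas UDF is declared with an integral returnType (e.g. LongType)
# but the function body actually produces floating-point values. Casting the
# Series to the declared type silently truncates the fractions -- the kind of
# incorrect result the documentation note warns users to check for.
returned = pd.Series([1.5, 2.7, 3.1])    # actual values returned by the UDF
as_declared = returned.astype("int64")   # coercion to the declared type
print(as_declared.tolist())              # prints [1, 2, 3]
```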

## How was this patch tested?

Only document change.

Closes #22610 from viirya/SPARK-25461.

Authored-by: Liang-Chi Hsieh <>
Signed-off-by: hyukjinkwon <>


Branch: refs/heads/master
Commit: 3eb842969906d6e81a137af6dc4339881df0a315
Parents: fba722e
Author: Liang-Chi Hsieh <>
Authored: Sun Oct 7 23:18:46 2018 +0800
Committer: hyukjinkwon <>
Committed: Sun Oct 7 23:18:46 2018 +0800

 python/pyspark/sql/ | 6 ++++++
 1 file changed, 6 insertions(+)
diff --git a/python/pyspark/sql/ b/python/pyspark/sql/
index 7685264..be089ee 100644
--- a/python/pyspark/sql/
+++ b/python/pyspark/sql/
@@ -2948,6 +2948,12 @@ def pandas_udf(f=None, returnType=None, functionType=None):
         can fail on special rows, the workaround is to incorporate the condition into the
     .. note:: The user-defined functions do not take keyword arguments on the calling side.
+    .. note:: The data type of returned `pandas.Series` from the user-defined functions should be
+        matched with defined returnType (see :meth:`types.to_arrow_type` and
+        :meth:`types.from_arrow_type`). When there is mismatch between them, Spark might do
+        conversion on returned data. The conversion is not guaranteed to be correct and results
+        should be checked for accuracy by users.
     # decorator @pandas_udf(returnType, functionType)
     is_decorator = f is None or isinstance(f, (str, DataType))
