spark-issues mailing list archives

From "Hyukjin Kwon (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-23314) Pandas grouped udf on dataset with timestamp column error
Date Sun, 11 Feb 2018 08:33:00 GMT

     [ https://issues.apache.org/jira/browse/SPARK-23314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-23314:
---------------------------------
    Fix Version/s:     (was: 2.3.0)
                   2.3.1

> Pandas grouped udf on dataset with timestamp column error 
> ----------------------------------------------------------
>
>                 Key: SPARK-23314
>                 URL: https://issues.apache.org/jira/browse/SPARK-23314
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark
>    Affects Versions: 2.3.0
>            Reporter: Felix Cheung
>            Assignee: Li Jin
>            Priority: Major
>             Fix For: 2.3.1
>
>
> Under SPARK-22216:
> When testing pandas_udf on group-bys, I saw this error with the timestamp column.
> File "pandas/_libs/tslib.pyx", line 3593, in pandas._libs.tslib.tz_localize_to_utc
> AmbiguousTimeError: Cannot infer dst time from Timestamp('2015-11-01 01:29:30'), try using the 'ambiguous' argument
> For details, see the Comment box. I'm able to reproduce this on the latest branch-2.3 (last change from Feb 1 UTC).
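The ambiguity the traceback reports can be reproduced with pandas alone, without Spark: 2015-11-01 01:29:30 falls inside the DST "fall back" hour in US timezones, so localizing it maps to two possible UTC instants. A minimal sketch; the timezone used here is an assumption for illustration, since the issue does not say which session timezone was in effect:

```python
# Minimal pandas-only reproduction of the reported ambiguity (no Spark needed).
# 'America/Los_Angeles' is an assumed timezone for illustration only.
import pandas as pd
import pytz

ts = pd.Timestamp("2015-11-01 01:29:30")

try:
    ts.tz_localize("America/Los_Angeles")  # default is ambiguous='raise'
except pytz.exceptions.AmbiguousTimeError as err:
    print("AmbiguousTimeError:", err)

# Passing the 'ambiguous' argument, as the error message suggests, resolves
# the timestamp explicitly to one side of the transition:
dst = ts.tz_localize("America/Los_Angeles", ambiguous=True)   # PDT, UTC-7
std = ts.tz_localize("America/Los_Angeles", ambiguous=False)  # PST, UTC-8
print(dst.utcoffset(), std.utcoffset())
```

This is the same localization pandas performs internally during the grouped-udf timestamp conversion, which is why the error surfaces from tz_localize_to_utc.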



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

