spark-issues mailing list archives

From "Dongjoon Hyun (JIRA)" <>
Subject [jira] [Issue Comment Deleted] (SPARK-16449) unionAll raises "Task not serializable"
Date Fri, 08 Jul 2016 19:20:11 GMT


Dongjoon Hyun updated SPARK-16449:
    Comment: was deleted

(was: Oh, I see.)

> unionAll raises "Task not serializable"
> ---------------------------------------
>                 Key: SPARK-16449
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.6.1
>         Environment: AWS EMR, Jupyter notebook
>            Reporter: Jeff Levy
>            Priority: Minor
> Goal: Take the output from `describe` on a large DataFrame, then use a loop to compute
> `skewness` and `kurtosis` from pyspark.sql.functions for each column, build the results into
> a two-row DataFrame, and use `unionAll` to merge it with the `describe` output.
> Issue: Despite the DataFrames having the same column names, in the same order, with the same
> dtypes, `unionAll` fails with "Task not serializable". However, if I build two test rows from
> dummy data, `unionAll` works fine. Also, if I collect my results and turn them straight back
> into DataFrames, `unionAll` succeeds.
> Step-by-step code and output with comments can be seen here:
> The issue appears to be in the way the loop in code block 6 builds the rows before
> parallelizing, but the results look no different from the test rows that do work. I reproduced
> this on multiple datasets, so downloading the notebook and pointing it at any data of your
> own should replicate it.

This message was sent by Atlassian JIRA

