spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-20580) Allow RDD cache with unserializable objects
Date Mon, 08 May 2017 08:03:04 GMT

    [ https://issues.apache.org/jira/browse/SPARK-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16000411#comment-16000411 ]

Sean Owen commented on SPARK-20580:
-----------------------------------

That error isn't about 'unserializable' objects. It's still not clear what the use case is for caching
unserializable objects, nor what your code is doing.

> Allow RDD cache with unserializable objects
> -------------------------------------------
>
>                 Key: SPARK-20580
>                 URL: https://issues.apache.org/jira/browse/SPARK-20580
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.3.0
>            Reporter: Fernando Pereira
>            Priority: Minor
>
> In my current scenario we load complex Python objects in the worker nodes that are not
> completely serializable. We then apply certain map operations to the RDD, which at some
> point we collect. In this basic usage all works well.
> However, if we cache() the RDD (which defaults to memory), it suddenly fails to execute
> the transformations after the caching step. Apparently caching serializes the RDD data and
> deserializes it whenever more transformations are required.
> It would be nice to avoid serialization of the objects if they are to be cached in memory,
> and keep the original objects.
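
For illustration, here is a minimal PySpark sketch of the behaviour described above. The
Handle class and the file paths are hypothetical stand-ins for the reporter's complex
objects; only the PySpark calls (parallelize, map, collect, cache, count) are real API, and
the sketch assumes the listed files exist on every worker.

    # Hypothetical reproduction: an object holding an unpicklable member
    # (an open file handle) survives plain transformations but breaks
    # as soon as cache() forces the elements to be pickled.
    from pyspark import SparkContext

    class Handle(object):
        # Stand-in for the complex objects described in the report.
        def __init__(self, path):
            self.path = path
            self.fh = open(path)  # open file handles cannot be pickled

    sc = SparkContext(appName="cache-unpicklable-sketch")

    paths = sc.parallelize(["a.txt", "b.txt"])  # assumed present on workers
    handles = paths.map(Handle)  # objects are built inside the workers

    # Works: the map chain is pipelined inside one Python worker per
    # partition, so the Handle objects themselves are never serialized;
    # only the picklable integer results travel back to the driver.
    print(handles.map(lambda h: len(h.fh.read())).collect())

    # Fails: caching pickles every element so the JVM can hold the bytes,
    # and pickling the open file raises a serialization error on the
    # first action that materializes the cached RDD.
    handles.cache()
    handles.count()

Absent the proposed improvement, a common workaround is to cache only a picklable
precursor (here, the paths RDD) and rebuild the heavy objects per partition, accepting the
recomputation cost.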



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


