spark-issues mailing list archives

From "Josh Rosen (JIRA)" <>
Subject [jira] [Updated] (SPARK-3731) RDD caching stops working in pyspark after some time
Date Tue, 07 Oct 2014 19:20:34 GMT


Josh Rosen updated SPARK-3731:
    Affects Version/s: 1.0.2

> RDD caching stops working in pyspark after some time
> ----------------------------------------------------
>                 Key: SPARK-3731
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, Spark Core
>    Affects Versions: 1.0.2, 1.1.0, 1.2.0
>         Environment: Linux, 32bit, both in local mode or in standalone cluster mode
>            Reporter: Milan Straka
>            Assignee: Davies Liu
>            Priority: Critical
>         Attachments: spark-3731.log, spark-3731.txt.bz2, worker.log
> Consider a file F which, when loaded with sc.textFile and cached, takes up slightly more
> than half of the free memory available for the RDD cache.
> When in PySpark the following is executed:
>   1) a = sc.textFile(F)
>   2) a.cache().count()
>   3) b = sc.textFile(F)
>   4) b.cache().count()
> and then the following is repeated (for example 10 times):
>   a) a.unpersist().cache().count()
>   b) b.unpersist().cache().count()
> then after some time there are no RDDs cached in memory.
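The numbered steps and the unpersist/cache cycle above can be sketched as a single repro script. This is only a sketch, not the reporter's exact code: the input path ("data.txt"), the master URL, and the helper names are assumptions; F should be a file whose cached form takes slightly more than half of the executor's storage memory.

```python
# Hedged repro sketch for SPARK-3731 (paths, names, and sizes are assumptions).

def recache(rdd):
    """One cycle from the report: unpersist, cache again, force materialization."""
    return rdd.unpersist().cache().count()


def reproduce(path="data.txt", iterations=10):
    # Requires a local Spark installation; pyspark is imported lazily so the
    # helper above stays importable without Spark.
    from pyspark import SparkContext

    sc = SparkContext("local", "spark-3731-repro")

    # Steps 1-4: load and cache the same file twice.
    a = sc.textFile(path)
    a.cache().count()
    b = sc.textFile(path)
    b.cache().count()

    # Steps a-b, repeated: after some iterations the Storage tab shows no
    # cached RDDs and the workers log "Not enough space to cache partition
    # ... in memory".
    for _ in range(iterations):
        recache(a)
        recache(b)

    sc.stop()

# To run against a real file: reproduce("path/to/F")
```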
> Also, from that point on, no other RDD ever gets cached: the worker always reports something
> like "WARN CacheManager: Not enough space to cache partition rdd_23_5 in memory! Free memory
> is 277478190 bytes.", even though rdd_23_5 is only ~50MB. The Executors tab of the Application
> Detail UI shows that all executors have 0MB memory used (which is consistent with the
> CacheManager warnings).
> When doing the same in Scala, everything works perfectly.
> I understand that this is a vague description, but I do not know how to describe the problem
> better.

This message was sent by Atlassian JIRA

