spark-issues mailing list archives

From "Nick Pritchard (JIRA)" <>
Subject [jira] [Created] (SPARK-10942) Not all cached RDDs are unpersisted
Date Tue, 06 Oct 2015 02:49:26 GMT
Nick Pritchard created SPARK-10942:

             Summary: Not all cached RDDs are unpersisted
                 Key: SPARK-10942
             Project: Spark
          Issue Type: Bug
          Components: Streaming
            Reporter: Nick Pritchard

I have a Spark Streaming application that caches RDDs inside of a {{transform}} closure. Looking
at the Spark UI, it seems that most of these RDDs are unpersisted after the batch completes,
but not all.

I have copied a minimal reproducible example below to highlight the problem. I run this and
monitor the Spark UI "Storage" tab. The example generates and caches 30 RDDs, and I see most
get cleaned up. However, in the end, some still remain cached. There seems to be some
randomness involved, because different RDDs remain cached on each run.

I have marked this as Major because I haven't been able to work around it, and it is a memory
leak for my application. I tried setting {{spark.cleaner.ttl}} but that did not change anything.

val inputRDDs = mutable.Queue.tabulate(30) { i =>
  sc.parallelize(Seq(i))
}
val input: DStream[Int] = ssc.queueStream(inputRDDs)

val output = input.transform { rdd =>
  if (rdd.isEmpty()) {
    rdd
  } else {
    // Cache an intermediate RDD inside the transform closure; these
    // are the RDDs that sometimes remain in the Storage tab.
    val rdd2 = rdd.map(identity).cache()
    val rdd3 = rdd2.map(identity)
    rdd3
  }
}
output.print()


This message was sent by Atlassian JIRA

