spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-10942) Not all cached RDDs are unpersisted
Date Wed, 07 Oct 2015 20:38:26 GMT

    [ https://issues.apache.org/jira/browse/SPARK-10942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14947533#comment-14947533 ]

Sean Owen commented on SPARK-10942:
-----------------------------------

I tried this on master in spark-shell:

{code}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.collection.mutable

val ssc = new StreamingContext(sc, Seconds(1))

val inputRDDs = mutable.Queue.tabulate(30) { i =>
  sc.parallelize(Seq(i))
}

val input = ssc.queueStream(inputRDDs)

val output = input.transform { rdd =>
  if (rdd.isEmpty()) {
    rdd
  } else {
    val rdd2 = rdd.map(identity)
    rdd2.cache()
    rdd2.setName(rdd.first().toString)
    val rdd3 = rdd2.map(identity) ++ rdd2.map(identity)
    rdd3
  }
}

output.print()
ssc.start()
{code}

After a short time (~30 seconds) I see nothing in the Storage tab: the RDDs were cached but are then unpersisted.
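For cases where the automatic cleanup does lag or miss an RDD, one possible workaround is to unpersist the previous batch's cached RDD explicitly from inside {{transform}} (which runs on the driver, so a driver-side var is safe to mutate there). This is only a sketch, not something tested against this issue; the {{prevCached}} var is a name introduced here for illustration:

```scala
import org.apache.spark.rdd.RDD

// Hypothetical helper: holds the RDD cached for the previous batch so it can
// be released explicitly instead of waiting for the ContextCleaner.
@volatile var prevCached: Option[RDD[Int]] = None

val output = input.transform { rdd =>
  if (rdd.isEmpty()) {
    rdd
  } else {
    // Release the previous batch's cache before creating a new one.
    prevCached.foreach(_.unpersist(blocking = false))
    val rdd2 = rdd.map(identity).cache()
    prevCached = Some(rdd2)
    rdd2.map(identity)
  }
}
```

This trades the automatic cleanup for deterministic release at the start of each non-empty batch, at the cost of keeping at most one extra RDD cached between batches.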

> Not all cached RDDs are unpersisted
> -----------------------------------
>
>                 Key: SPARK-10942
>                 URL: https://issues.apache.org/jira/browse/SPARK-10942
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>            Reporter: Nick Pritchard
>            Priority: Minor
>         Attachments: SPARK-10942_1.png, SPARK-10942_2.png, SPARK-10942_3.png
>
>
> I have a Spark Streaming application that caches RDDs inside of a {{transform}} closure. Looking at the Spark UI, it seems that most of these RDDs are unpersisted after the batch completes, but not all.
> I have copied a minimal reproducible example below to highlight the problem. I run this and monitor the Spark UI "Storage" tab. The example generates and caches 30 RDDs, and I see most get cleaned up. However, in the end some still remain cached. There is some randomness going on, because different RDDs remain cached on each run.
> I have marked this as Major because I haven't been able to work around it and it is a memory leak for my application. I tried setting {{spark.cleaner.ttl}} but that did not change anything.
> {code}
> import scala.collection.mutable
> import org.apache.spark.streaming.{Seconds, StreamingContext}
> import org.apache.spark.streaming.dstream.DStream
>
> val ssc = new StreamingContext(sc, Seconds(1))
>
> val inputRDDs = mutable.Queue.tabulate(30) { i =>
>   sc.parallelize(Seq(i))
> }
> val input: DStream[Int] = ssc.queueStream(inputRDDs)
> val output = input.transform { rdd =>
>   if (rdd.isEmpty()) {
>     rdd
>   } else {
>     val rdd2 = rdd.map(identity)
>     rdd2.setName(rdd.first().toString)
>     rdd2.cache()
>     val rdd3 = rdd2.map(identity)
>     rdd3
>   }
> }
> output.print()
> ssc.start()
> ssc.awaitTermination()
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
