From: aarondav
To: dev@spark.apache.org
Reply-To: dev@spark.apache.org
Subject: [GitHub] spark pull request: [SPARK-1103] [WIP] Automatic garbage collectio...
Message-Id: <20140313005852.029739434AD@tyr.zones.apache.org>
Date: Thu, 13 Mar 2014 00:58:51 +0000 (UTC)

Github user aarondav commented on a diff in the pull request:

    https://github.com/apache/spark/pull/126#discussion_r10549336

--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark
+
+import scala.collection.mutable.{ArrayBuffer, SynchronizedBuffer}
+
+import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}
+
+import org.apache.spark.rdd.RDD
+
+/** Listener class used for testing when any item has been cleaned by the Cleaner class */
+private[spark] trait CleanerListener {
+  def rddCleaned(rddId: Int)
+  def shuffleCleaned(shuffleId: Int)
+}
+
+/**
+ * Cleans RDDs and shuffle data.
+ */
+private[spark] class ContextCleaner(env: SparkEnv) extends Logging {
+
+  /** Classes to represent cleaning tasks */
+  private sealed trait CleaningTask
+  private case class CleanRDD(sc: SparkContext, id: Int) extends CleaningTask
+  private case class CleanShuffle(id: Int) extends CleaningTask
+  // TODO: add CleanBroadcast
+
+  private val queue = new LinkedBlockingQueue[CleaningTask]
+
+  protected val listeners = new ArrayBuffer[CleanerListener]
+    with SynchronizedBuffer[CleanerListener]
+
+  private val cleaningThread = new Thread() { override def run() { keepCleaning() }}
+
+  private var stopped = false
+
+  /** Start the cleaner */
+  def start() {
+    cleaningThread.setDaemon(true)
+    cleaningThread.start()
+  }
+
+  /** Stop the cleaner */
+  def stop() {
+    synchronized { stopped = true }
+    cleaningThread.interrupt()
+  }
+
+  /** Clean (unpersist) RDD data. */
+  def cleanRDD(rdd: RDD[_]) {
--- End diff --

This may be paranoia speaking, but it would seem cleaner to me to pass in only the fields of RDD that you need here, because calling cleanRDD(this) in a finalizer seems sketchy. It's clearly not a problem right now, just something to keep in mind for future modifications...

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
---
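
As a concrete illustration of the suggestion above: a minimal, hypothetical sketch in which the cleaner is handed only the RDD's id rather than the RDD itself, so a finalizer never passes `this` to another component. The name IdOnlyCleaner and the exact signatures below are assumptions made for illustration, not code from this pull request.

    import java.util.concurrent.LinkedBlockingQueue

    // Illustrative sketch, not the PR's actual API: the cleaner accepts only the
    // RDD's id, so an RDD finalizer never has to hand the RDD object to another component.
    class IdOnlyCleaner {

      private sealed trait CleaningTask
      // Holds just the id; no reference to the RDD or SparkContext is retained.
      private case class CleanRDD(rddId: Int) extends CleaningTask

      private val queue = new LinkedBlockingQueue[CleaningTask]

      /** Enqueue an RDD for cleanup, identified only by its id. */
      def cleanRDD(rddId: Int): Unit = queue.put(CleanRDD(rddId))
    }

    object IdOnlyCleanerExample extends App {
      val cleaner = new IdOnlyCleaner
      // An RDD finalizer would call something like cleaner.cleanRDD(id) rather than
      // cleaner.cleanRDD(this), keeping the cleaner free of references to the RDD itself.
      cleaner.cleanRDD(42)
    }

One consequence of this design is that the cleanup queue only retains primitive ids, which avoids keeping otherwise-garbage RDD objects reachable from the cleaner thread.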