From: pwendell
To: reviews@spark.apache.org
Subject: [GitHub] spark pull request: [SPARK-3139] Made ContextCleaner to not block ...
Date: Wed, 27 Aug 2014 03:23:56 +0000 (UTC)

Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2143#discussion_r16755854

--- Diff: core/src/main/scala/org/apache/spark/ContextCleaner.scala ---
@@ -76,6 +76,20 @@ private[spark] class ContextCleaner(sc: SparkContext) extends Logging {
   private val blockOnCleanupTasks = sc.conf.getBoolean(
     "spark.cleaner.referenceTracking.blocking", true)

+  /**
+   * Whether the cleaning thread will block on shuffle cleanup tasks.
+   * This overrides the global setting `blockOnCleanupTasks`.
+   *
+   * When the context cleaner is configured to block on every delete request, it can throw timeout
+   * exceptions on cleanup of shuffle blocks, as reported in SPARK-3139. To avoid that, this
+   * parameter by default disables blocking on shuffle cleanups. Note that this does not affect
+   * the cleanup of RDDs and broadcasts. This is intended to be a temporary workaround,
+   * until the real Akka issue (referred to in the comment above `blockOnCleanupTasks`) is
+   * resolved.
+   */
+  private val blockOnShuffleCleanupTasks = sc.conf.getBoolean(
+    "spark.cleaner.referenceTracking.blocking.shuffle", false)
--- End diff --

gotcha - I think it's fine as-is. This is not a user-visible config anyways.
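For context on the flag discussed in the diff: both settings are plain boolean entries read from the SparkConf, so they can be flipped when building the context. A minimal sketch of how one might override the shuffle-specific flag (the app name and this usage are illustrative, not part of the PR):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical example: re-enable blocking on shuffle cleanup,
// overriding the new default of false introduced by this diff.
val conf = new SparkConf()
  .setAppName("cleaner-demo") // hypothetical app name
  .set("spark.cleaner.referenceTracking.blocking", "true")         // global cleanup blocking
  .set("spark.cleaner.referenceTracking.blocking.shuffle", "true") // shuffle-specific override

val sc = new SparkContext(conf)
// ContextCleaner reads both flags via sc.conf.getBoolean at construction time.
```

Since `blockOnShuffleCleanupTasks` is read once in the ContextCleaner constructor, the setting only takes effect if supplied before the SparkContext is created.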