Date: Mon, 18 Jan 2016 17:03:40 +0000 (UTC)
From: "Apache Spark (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Assigned] (SPARK-11137) Make StreamingContext.stop() exception-safe

     [ https://issues.apache.org/jira/browse/SPARK-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-11137:
------------------------------------

    Assignee: Apache Spark

> Make StreamingContext.stop() exception-safe
> -------------------------------------------
>
>                 Key: SPARK-11137
>                 URL: https://issues.apache.org/jira/browse/SPARK-11137
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.5.1
>            Reporter: Felix Cheung
>            Assignee: Apache Spark
>            Priority: Minor
>
> In StreamingContext.stop(), when an exception is thrown, the rest of the stop/cleanup actions are aborted.
> Discussed in https://github.com/apache/spark/pull/9116, where srowen commented:
> Hm, this is getting unwieldy. There are several nested try blocks here. The same argument applies to many of these methods -- if one fails, should the others not continue trying? A tidier solution would be to execute a series of () -> Unit code blocks that each perform some cleanup, making sure each one fires in succession regardless of whether the others succeed. The final one, which removes the shutdown hook, could occur outside synchronization.
> I realize we're expanding the scope of the change here, but is it maybe worthwhile to go all the way?
> Really, something similar could be done for SparkContext, and there's an existing JIRA for it somewhere.
> At least, I'd prefer to either narrowly fix the deadlock here, or fix all of the finally-related issues separately and all at once.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
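The pattern srowen suggests -- a sequence of `() -> Unit` cleanup blocks, each wrapped in its own try so that one failure cannot abort the rest -- can be sketched roughly as follows. This is an illustrative sketch only; the names `runAll` and the step labels are hypothetical and do not reflect Spark's actual internal API.

```scala
// Illustrative sketch: run each cleanup step in its own try block so a
// failure in one step does not prevent the remaining steps from running.
object ExceptionSafeStop {

  // Each step is a (label, () => Unit) pair; failures are logged, not rethrown.
  def runAll(steps: Seq[(String, () => Unit)]): Unit = {
    steps.foreach { case (name, step) =>
      try {
        step()
      } catch {
        case e: Exception =>
          // Log and continue with the remaining cleanup actions.
          System.err.println(s"Error during '$name': ${e.getMessage}")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val completed = scala.collection.mutable.ListBuffer[String]()
    runAll(Seq(
      ("stop receivers",       () => completed += "receivers"),
      ("stop job scheduler",   () => throw new RuntimeException("boom")),
      ("remove shutdown hook", () => completed += "hook")
    ))
    // Even though the middle step failed, the final step still ran.
    println(completed.mkString(","))
  }
}
```

Running `main` prints `receivers,hook`: the second step's exception is caught and logged, and cleanup proceeds to the last block, matching the "fire in succession regardless of the others" behavior described in the comment.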