spark-issues mailing list archives

From "Apache Spark (JIRA)" <>
Subject [jira] [Assigned] (SPARK-11137) Make StreamingContext.stop() exception-safe
Date Mon, 18 Jan 2016 17:03:40 GMT


Apache Spark reassigned SPARK-11137:

    Assignee: Apache Spark

> Make StreamingContext.stop() exception-safe
> -------------------------------------------
>                 Key: SPARK-11137
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 1.5.1
>            Reporter: Felix Cheung
>            Assignee: Apache Spark
>            Priority: Minor
> In StreamingContext.stop(), when an exception is thrown, the rest of the stop/cleanup
actions are aborted.
> Discussed in:
> srowen commented:
> Hm, this is getting unwieldy. There are several nested try blocks here. The same argument
goes for many of these methods -- if one fails, should the rest not continue trying? A tidier
solution would be to execute a series of () => Unit code blocks that each perform some cleanup,
and make sure that they fire in succession regardless of the others. The final one, removing
the shutdown hook, could occur outside synchronization.
> I realize we're expanding the scope of the change here, but is it maybe worthwhile to
go all the way here?
> Really, something similar could be done for SparkContext and there's an existing JIRA
for it somewhere.
> At least, I'd prefer to either narrowly fix the deadlock here, or fix all of the finally-related
issues separately and all at once.
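
The cleanup-sequence idea quoted above can be sketched as follows. This is a minimal illustration of the pattern, not Spark's actual implementation; the names runCleanups, stopReceivers, stopJobGenerator, and removeShutdownHook are hypothetical stand-ins for the steps stop() performs.

```scala
import scala.util.control.NonFatal

/** Run each named cleanup step in order. An exception in one step is
  * caught and collected so the remaining steps still execute, which is
  * the exception-safe behavior the comment asks stop() to have. */
def runCleanups(steps: Seq[(String, () => Unit)]): Seq[(String, Throwable)] =
  steps.flatMap { case (name, step) =>
    try { step(); None }
    catch { case NonFatal(e) => Some(name -> e) } // real code would log here
  }

// Hypothetical stop() sequence: even though the middle step throws,
// the shutdown-hook removal still runs afterwards.
val executed = scala.collection.mutable.Buffer[String]()
val failures = runCleanups(Seq(
  ("stopReceivers",      () => executed += "stopReceivers"),
  ("stopJobGenerator",   () => throw new IllegalStateException("boom")),
  ("removeShutdownHook", () => executed += "removeShutdownHook")
))
```

Collecting the failures (rather than rethrowing the first one) keeps every step's error visible while still guaranteeing that later steps run.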

This message was sent by Atlassian JIRA

