spark-issues mailing list archives

From "Apache Spark (JIRA)" <>
Subject [jira] [Commented] (SPARK-6980) Akka timeout exceptions indicate which conf controls them
Date Wed, 20 May 2015 19:55:00 GMT


Apache Spark commented on SPARK-6980:

User 'hardmettle' has created a pull request for this issue:

> Akka timeout exceptions indicate which conf controls them
> ---------------------------------------------------------
>                 Key: SPARK-6980
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Imran Rashid
>            Assignee: Harsh Gupta
>            Priority: Minor
>              Labels: starter
>         Attachments: Spark-6980-Test.scala
> If you hit one of the akka timeouts, you just get an exception like
> {code}
> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
> {code}
> The exception doesn't indicate how to change the timeout, though there is usually (always?)
> a corresponding setting in {{SparkConf}}. It would be nice if the exception indicated the
> relevant setting.
> I think this should be pretty easy to do -- we just need to create something like a {{NamedTimeout}}.
> It would have its own {{await}} method that catches the Akka timeout and throws its own exception.
> We should change {{RpcUtils.askTimeout}} and {{RpcUtils.lookupTimeout}} to always return a
> {{NamedTimeout}}, so we can be sure that any time we hit a timeout, we get a better exception.
> Given the latest refactoring to the rpc layer, this needs to be done in both {{AkkaUtils}}
> and {{AkkaRpcEndpoint}}.
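
A minimal sketch of the idea described above, not Spark's actual implementation: a timeout that carries the name of the {{SparkConf}} key controlling it, and an {{await}} method that rethrows the Akka {{TimeoutException}} with that key in the message. The class name {{NamedTimeout}} comes from the issue description; the conf key used in the usage line is only an example.

```scala
import java.util.concurrent.TimeoutException
import scala.concurrent.{Await, Awaitable}
import scala.concurrent.duration.FiniteDuration

// Sketch: wrap a duration together with the SparkConf key that controls it,
// so a timeout failure can tell the user which setting to change.
case class NamedTimeout(duration: FiniteDuration, confKey: String) {
  def await[T](awaitable: Awaitable[T]): T =
    try {
      Await.result(awaitable, duration)
    } catch {
      case _: TimeoutException =>
        // Rethrow with the controlling conf key included in the message.
        throw new TimeoutException(
          s"Futures timed out after [$duration]. " +
          s"This timeout is controlled by $confKey")
    }
}
```

Usage would look like {{NamedTimeout(30.seconds, "spark.rpc.askTimeout").await(future)}}, producing a message that names {{spark.rpc.askTimeout}} instead of the bare "Futures timed out" error.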

This message was sent by Atlassian JIRA

