spark-issues mailing list archives

From "Miles Crawford (JIRA)" <>
Subject [jira] [Commented] (SPARK-14209) Application failure during preemption.
Date Fri, 01 Apr 2016 21:47:25 GMT


Miles Crawford commented on SPARK-14209:

Can you be a bit more specific about the default log configuration from Spark?

All we're doing in terms of logging is placing a logback.xml file into our classpath that
sets the console logger to level INFO...
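
A minimal logback.xml of the kind described above might look like the following. This is a sketch, not the reporter's actual file; the appender name and pattern string are illustrative:

```xml
<configuration>
  <!-- Console appender; the pattern below is a common example layout -->
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- Root logger at INFO, routed to the console -->
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```

Placed on the application classpath, logback picks this up automatically at startup.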

> Application failure during preemption.
> --------------------------------------
>                 Key: SPARK-14209
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager
>    Affects Versions: 1.6.1
>         Environment: Spark on YARN
>            Reporter: Miles Crawford
> We have a fair-sharing cluster set up, including the external shuffle service. When
> a new job arrives, existing jobs are successfully preempted down to fit.
> A spate of these messages arrives:
> 	ExecutorLostFailure (executor 48 exited unrelated to the running tasks) Reason: Container
> container_1458935819920_0019_01_000143 on host:
> was preempted.
> This seems fine - the problem is that soon thereafter, our whole application fails because
> it is unable to fetch blocks from the preempted containers:
> Failed to fetch block from 1 locations.
> Most recent failure cause:
>     Caused by: Failed to connect to
>         Caused by: Connection refused:
> Full stack:
> Spark does not attempt to recreate these blocks - the tasks simply fail over and over
> until the maxTaskAttempts value is reached.
> It appears to me that there is some fault in the way preempted containers are being handled
> - shouldn't these blocks be recreated on demand?
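
For context, the retry budget the report describes is controlled by Spark configuration. A hedged sketch of how one might widen it (assuming the Spark 1.6 keys spark.task.maxFailures, spark.shuffle.io.maxRetries, and spark.shuffle.io.retryWait; the values are illustrative, and raising them only delays, not fixes, the failure reported here):

```shell
# Illustrative tuning only - gives tasks more retries before the
# application is failed, and makes shuffle fetches retry longer
# before giving up on a (possibly preempted) remote executor.
spark-submit \
  --conf spark.task.maxFailures=8 \
  --conf spark.shuffle.io.maxRetries=10 \
  --conf spark.shuffle.io.retryWait=15s \
  ...
```
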

This message was sent by Atlassian JIRA

