spark-issues mailing list archives

From "Apache Spark (JIRA)" <>
Subject [jira] [Assigned] (SPARK-15725) Dynamic allocation hangs YARN app when executors time out
Date Thu, 02 Jun 2016 23:22:59 GMT


Apache Spark reassigned SPARK-15725:

    Assignee:     (was: Apache Spark)

> Dynamic allocation hangs YARN app when executors time out
> ---------------------------------------------------------
>                 Key: SPARK-15725
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.1, 2.0.0
>            Reporter: Ryan Blue
> We've had a problem with dynamic allocation on YARN (since 1.6) where a large stage
causes a lot of executors to be killed around the same time, which in turn causes the driver
and AM to lock up and wait indefinitely. This can happen even with a small number of executors (~100).
> When executors are killed by the driver, the [network connection to the driver disconnects|].
That triggers a call to the AM to find out why the executor died, followed by a [blocking
and retrying `RemoveExecutor` RPC call|] that results in a second `KillExecutor` call to the
AM. When many executors are killed around the same time, the driver's AM-communication
threads are all tied up blocking on the AM (see the stack trace below, which was identical
for 42 threads). I think this behavior, the network disconnect and subsequent cleanup, is
unique to YARN.
> {code:title=Driver AM thread stack}
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.parkNanos(
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(
> scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:208)
> scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:218)
> scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
> scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
> scala.concurrent.Await$.result(package.scala:190)
> org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:81)
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
> org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
> org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$2.apply$mcV$sp(YarnSchedulerBackend.scala:286)
> org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$2.apply(YarnSchedulerBackend.scala:286)
> org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint$$anonfun$receiveAndReply$1$$anonfun$applyOrElse$2.apply(YarnSchedulerBackend.scala:286)
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
> scala.concurrent.impl.Future$
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> java.util.concurrent.ThreadPoolExecutor$
> {code}
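The 42 identical stacks above are the signature of a bounded pool whose every thread is parked in a blocking wait. A minimal Java sketch of that failure mode (hypothetical names, not Spark code): a fixed-size pool whose workers all block on an "AM reply" that never arrives, so even a trivial new request is starved.

```java
import java.util.concurrent.*;

// Hypothetical sketch (not Spark code): a bounded RPC pool whose threads all
// block on replies that never arrive, so even trivial new work is starved.
public class PoolExhaustion {
    // Returns true only if the probe task ran while the pool was blocked.
    static boolean probeServedWhileBlocked() throws Exception {
        ExecutorService rpcPool = Executors.newFixedThreadPool(2); // driver-side pool
        CountDownLatch amReply = new CountDownLatch(1);            // AM reply that never comes

        // Two "RemoveExecutor" handlers park waiting on the AM, filling the pool.
        for (int i = 0; i < 2; i++) {
            rpcPool.submit(() -> {
                try { amReply.await(); } catch (InterruptedException ignored) { }
            });
        }

        // A third, trivial request is queued behind them.
        Future<String> probe = rpcPool.submit(() -> "served");
        boolean served;
        try {
            probe.get(200, TimeUnit.MILLISECONDS);
            served = true;                 // pool had a spare thread
        } catch (TimeoutException e) {
            served = false;                // pool exhausted; probe was starved
        }
        amReply.countDown();               // let the "AM" reply so the pool drains
        rpcPool.shutdown();
        return served;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(probeServedWhileBlocked()
            ? "pool had spare capacity" : "pool exhausted: probe starved");
    }
}
```

In the real driver the pool is larger, but the same arithmetic applies: once the number of simultaneously dying executors reaches the pool size, no further AM traffic is processed.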
> The RPC calls to the AM aren't returning because the `YarnAllocator` is spending all
of its time in the `allocateResources` method. That class's public methods are synchronized,
so only one RPC can be satisfied at a time. It is constantly calling `allocateResources`
because [its thread|] is [woken up|] by calls to get the failure reason for an executor,
which is part of the chain of events in the driver for each executor that goes down.
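The lock contention described above can be sketched in a few lines of Java (method names mirror the description, but this is not the real `YarnAllocator`): because both methods are synchronized on the same instance, a loss-reason query can only wait while an allocation pass holds the monitor.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch (names mirror the description above, not real Spark code):
// all public methods are synchronized on one instance, so while allocateResources
// holds the monitor, getExecutorLossReason callers can only wait.
public class AllocatorLock {
    // Stands in for the allocation pass the allocator thread keeps re-entering.
    public synchronized void allocateResources(long millis) {
        try { TimeUnit.MILLISECONDS.sleep(millis); } catch (InterruptedException ignored) { }
    }

    // Stands in for the RPC the driver blocks on for each dead executor.
    public synchronized String getExecutorLossReason(String id) {
        return "lost: " + id;
    }

    // Measures how long a loss-reason query waits behind a 500 ms allocation pass.
    public static long queryWaitMillis() throws InterruptedException {
        AllocatorLock alloc = new AllocatorLock();
        Thread loop = new Thread(() -> alloc.allocateResources(500));
        loop.start();
        Thread.sleep(50);                        // let the loop take the monitor first
        long t0 = System.nanoTime();
        alloc.getExecutorLossReason("exec-7");   // parks until the monitor is free
        long waited = (System.nanoTime() - t0) / 1_000_000;
        loop.join();
        return waited;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("query waited ~" + queryWaitMillis() + " ms behind allocateResources");
    }
}
```

The vicious cycle is that each query also wakes the allocation thread back up, so the monitor is almost never free.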
> The final result is that the `YarnAllocator` fails to respond to RPC calls for long enough
that the calls time out and replies to non-blocking calls are dropped. The application then
cannot make any progress, because everything either retries or exits, and it *hangs for 24+
hours* until enough errors accumulate that it dies.
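The timeout-then-retry behavior described here can be illustrated with a small Java sketch (hypothetical names; Spark's actual path goes through `RpcTimeout.awaitResult`, visible in the stack trace above): a blocking wait on a reply that never arrives ends in a timeout, and the caller's only recourse is to retry.

```java
import java.util.concurrent.*;

// Hypothetical sketch of the failure mode: the allocator never answers, so the
// driver's blocking wait on the RPC reply times out and the caller retries.
public class RpcTimeoutDemo {
    static String askWithTimeout(CompletableFuture<String> reply, long millis) {
        try {
            return reply.get(millis, TimeUnit.MILLISECONDS); // blocking wait on the RPC
        } catch (TimeoutException e) {
            return "timed out; caller will retry";
        } catch (Exception e) {
            return "failed: " + e;
        }
    }

    public static void main(String[] args) {
        // A reply future that is never completed, standing in for the busy allocator.
        CompletableFuture<String> neverAnswered = new CompletableFuture<>();
        System.out.println(askWithTimeout(neverAnswered, 100));
    }
}
```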

This message was sent by Atlassian JIRA

