spark-issues mailing list archives

From "Stavros Kontopoulos (JIRA)" <>
Subject [jira] [Commented] (SPARK-23423) Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors
Date Tue, 20 Feb 2018 14:04:00 GMT


Stavros Kontopoulos commented on SPARK-23423:

[~igor.berman] Thanks for supplying the info. We are aware of SPARK-19755; we have actually wanted to
fix this for quite some time. The hardcoded value is certainly not the way to go. One note
here though: try to avoid the executor failures in the first place. Even if you
tolerate more failures, they shouldn't be failing. AFAIK you can allocate different port ranges for
your tasks, or by default ports are assigned randomly, from what I recall. I am still curious how they
fail... maybe there is an issue there as well.
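As a hedged sketch of the port-range suggestion above: `spark.blockManager.port` and `spark.port.maxRetries` are real Spark settings, but the values below are purely illustrative and would have to match whatever range the cluster's firewall actually allows.

```shell
# Illustrative values only: make each executor's block manager try to bind
# at port 40000 and retry successive ports (up to 32 attempts, i.e. through
# 40031) before failing, instead of using a random ephemeral port.
spark-submit \
  --conf spark.blockManager.port=40000 \
  --conf spark.port.maxRetries=32 \
  ...
```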

> Application declines any offers when killed+active executors reach spark.dynamicAllocation.maxExecutors
> -------------------------------------------------------------------------------------------------------
>                 Key: SPARK-23423
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos, Spark Core
>    Affects Versions: 2.2.1
>            Reporter: Igor Berman
>            Priority: Major
>              Labels: Mesos, dynamic_allocation
> Hi
> Mesos Version:1.1.0
> I've noticed rather strange behavior of MesosCoarseGrainedSchedulerBackend when running
on Mesos with dynamic allocation enabled and the maximum number of executors limited by spark.dynamicAllocation.maxExecutors.
> Suppose we have a long-running driver with a cyclic pattern of resource consumption (with
some idle time in between); due to dynamic allocation it receives offers and then releases them
after the current chunk of work is processed.
> Since at [] the
backend compares numExecutors < executorLimit, and
> numExecutors is defined as , while slaves holds all slaves
ever "met", i.e. both active and killed (see comment [])
> On the other hand, the number of taskIds should be updated via statusUpdate, but suppose
this update is lost (in fact I don't see any 'is now TASK_KILLED' log lines), so this number of executors
might be wrong.
> I've created a test that "reproduces" this behavior; not sure how good it is:
> {code:java}
> //MesosCoarseGrainedSchedulerBackendSuite
> test("max executors registered stops to accept offers when dynamic allocation enabled") {
>   setBackend(Map(
>     "spark.dynamicAllocation.maxExecutors" -> "1",
>     "spark.dynamicAllocation.enabled" -> "true",
>     "spark.dynamicAllocation.testing" -> "true"))
>   backend.doRequestTotalExecutors(1)
>   val (mem, cpu) = (backend.executorMemory(sc), 4)
>   val offer1 = createOffer("o1", "s1", mem, cpu)
>   backend.resourceOffers(driver, List(offer1).asJava)
>   verifyTaskLaunched(driver, "o1")
>   backend.doKillExecutors(List("0"))
>   verify(driver, times(1)).killTask(createTaskId("0"))
>   val offer2 = createOffer("o2", "s2", mem, cpu)
>   backend.resourceOffers(driver, List(offer2).asJava)
>   verify(driver, times(1)).declineOffer(offer2.getId)
> }{code}
> Workaround: Don't set maxExecutors with dynamic allocation on.
> Please advise.
> Igor
> Tagging you, friends, since you were the last to touch this piece of code and can probably
advise something ([~vanzin], [~skonto], [~susanxhuynh])
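The accounting problem the quoted report describes can be sketched in isolation. This is a hypothetical, simplified model, not the backend's actual code: only the names numExecutors and executorLimit mirror MesosCoarseGrainedSchedulerBackend; the map of per-slave task counts and everything else here is illustrative.

```scala
// Hypothetical, simplified model of the "numExecutors < executorLimit"
// offer check. taskCounts maps each slave ever seen to the number of
// task IDs still recorded for it. If a TASK_KILLED status update is
// lost, the killed executor's task is never removed, so the total
// never drops back below the limit.
object ExecutorLimitSketch {
  def numExecutors(taskCounts: Map[String, Int]): Int =
    taskCounts.values.sum

  // Mirrors the backend's offer-acceptance condition.
  def acceptsOffer(taskCounts: Map[String, Int], executorLimit: Int): Boolean =
    numExecutors(taskCounts) < executorLimit

  def main(args: Array[String]): Unit = {
    val limit = 1
    // One executor launched on s1: the limit is reached, offers decline.
    val afterLaunch = Map("s1" -> 1)
    println(acceptsOffer(afterLaunch, limit)) // false

    // The executor is killed, but the status update is lost, so the
    // map is unchanged: offers from any slave keep being declined.
    val afterLostKill = afterLaunch
    println(acceptsOffer(afterLostKill, limit)) // false
  }
}
```

Under this model, any fix has to either evict dead slaves from the bookkeeping or count only tasks in active states, so a lost terminal status update cannot pin the count at the limit forever.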

This message was sent by Atlassian JIRA
