spark-reviews mailing list archives

From dragos <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-6287][MESOS] Add dynamic allocation to ...
Date Mon, 11 May 2015 16:31:49 GMT
Github user dragos commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4984#discussion_r30055643
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala ---
    @@ -281,4 +319,38 @@ private[spark] class CoarseMesosSchedulerBackend(
           super.applicationId
         }
     
    +  override def doRequestTotalExecutors(requestedTotal: Int): Boolean = {
    +    // We don't truly know whether we can fulfill the full number of executors,
    +    // since in coarse-grained mode it depends on the number of slaves available.
    +    logInfo("Capping the total number of executors to " + requestedTotal)
    +    executorLimitOption = Option(requestedTotal)
    +    true
    +  }
    +
    +  override def doKillExecutors(executorIds: Seq[String]): Boolean = {
    +    if (mesosDriver == null) {
    +      logWarning("Asked to kill executors before the Mesos driver was started.")
    +      return false
    +    }
    +
    +    val slaveIdToTaskId = taskIdToSlaveId.inverse()
    +    for (executorId <- executorIds) {
    +      val slaveId = executorId.split("/")(0)
    +      if (slaveIdToTaskId.contains(slaveId)) {
    +        mesosDriver.killTask(
    +          TaskID.newBuilder().setValue(slaveIdToTaskId.get(slaveId).toString).build)
    +        pendingRemovedSlaveIds += slaveId
    +      } else {
    +        logWarning("Unable to find executor ID '" + executorId + "' in Mesos scheduler")
    +      }
    +    }
    +
    +    // We cannot simply decrement the existing executor limit, as we may not be able to
    +    // launch as many executors as the limit allows. But we assume that if we are asked
    +    // to kill executors, the scheduler wants a limit lower than the number of executors
    +    // that have been launched. Therefore, we take the number of executors currently
    +    // launched and subtract the executors being killed to obtain the new limit.
    +    executorLimitOption = Option(taskIdToSlaveId.size - pendingRemovedSlaveIds.size)
    --- End diff --
    
    Oh, I forgot to push my latest changes. Yes, it's `max` now, as you suggested.
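    Concretely, the updated line would be along these lines (a sketch assuming the
    suggestion was to clamp the limit at zero; the actual pushed change may differ):
    
        // Sketch (assumed form of the `max` fix): never let the computed limit
        // go below zero when pending kills outnumber the tracked tasks.
        executorLimitOption = Some(math.max(taskIdToSlaveId.size - pendingRemovedSlaveIds.size, 0))
    
    Clamping at zero means a burst of kill requests simply pins the limit at
    "launch nothing new" rather than producing a meaningless negative value.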


