spark-reviews mailing list archives

From jiangxb1987 <...@git.apache.org>
Subject [GitHub] spark pull request #19832: [SPARK-22628][CORE]Some situations, the assignm...
Date Tue, 28 Nov 2017 15:29:25 GMT
Github user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19832#discussion_r153526390
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala ---
    @@ -671,10 +671,23 @@ private[deploy] class Master(
           // If the cores left is less than the coresPerExecutor, the cores left will not be allocated
           if (app.coresLeft >= coresPerExecutor) {
             // Filter out workers that don't have enough resources to launch an executor
    -        val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
    +        var usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
               .filter(worker => worker.memoryFree >= app.desc.memoryPerExecutorMB &&
                 worker.coresFree >= coresPerExecutor)
               .sortBy(_.coresFree).reverse
    +
    +        if (spreadOutApps) {
    --- End diff ---
    
    IMO `spreadOutApps` only guarantees that we perform round-robin scheduling
    across the nodes during a scheduling pass; it does not guarantee a perfectly
    uniform distribution of executors across the workers. This change targets a
    corner case, and I'm hesitant to believe it is worth the added complexity.


---
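
To make the round-robin point in the comment concrete, below is a minimal,
self-contained Scala sketch of per-pass round-robin core assignment across
workers. The worker state and names (`freeCores`, `assigned`, `coresToAssign`)
are hypothetical simplifications for illustration, not Spark's actual Master
internals: round-robin order is respected on every pass, yet uneven free
capacity can still leave the resulting distribution skewed.

    object RoundRobinSketch {
      def main(args: Array[String]): Unit = {
        val coresPerExecutor = 2
        var coresToAssign = 8
        // Hypothetical free cores per worker: worker 0 has more spare
        // capacity than the others (e.g. an earlier app's executors
        // landed elsewhere).
        val freeCores = Array(6, 2, 2)
        val assigned = Array(0, 0, 0)

        var pos = 0
        while (coresToAssign >= coresPerExecutor && freeCores.exists(_ >= coresPerExecutor)) {
          if (freeCores(pos) >= coresPerExecutor) {
            // Place one executor's worth of cores on this worker.
            freeCores(pos) -= coresPerExecutor
            assigned(pos) += coresPerExecutor
            coresToAssign -= coresPerExecutor
          }
          // Round-robin: move on to the next worker either way.
          pos = (pos + 1) % freeCores.length
        }

        // Prints "assigned cores per worker: 4, 2, 2" -- round-robin order
        // was respected on every pass, yet the result is not uniform.
        println(s"assigned cores per worker: ${assigned.mkString(", ")}")
      }
    }

With `freeCores = Array(6, 2, 2)` and eight cores to place, the pass assigns
4, 2, 2: spread out across the nodes, but not uniform.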
