spark-reviews mailing list archives

From 10110346 <>
Subject [GitHub] spark pull request #19832: [SPARK-22628][CORE]Some situations, the assignm...
Date Tue, 28 Nov 2017 11:18:10 GMT
GitHub user 10110346 opened a pull request:

    [SPARK-22628][CORE] In some situations, the assignment of executors to workers is not what
we expect when `spark.deploy.spreadOut=true`.

    ## What changes were proposed in this pull request?
    For example, suppose a cluster has 3 workers (workA, workB, workC): workA has 1 core left, workB
has 1 core left, and workC has no cores left.
    A user requests 3 executors (spark.cores.max = 3, spark.executor.cores = 1); obviously,
workA and workB will each be assigned one executor.
    A moment later, if some apps release cores so that workB has 3 cores left and workC has 2 cores
left, the remaining executor should be assigned to workC, not workB.
    This problem is especially serious for dynamic executor allocation in standalone mode.
    This PR sorts `usableWorkers` by a different key to solve this problem.
    ## How was this patch tested?
    Manual test
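
    To make the scenario concrete, here is a small Python model of spread-out placement (the actual scheduler lives in Scala in `Master.startExecutorsOnWorkers`; the function name `pick_workers`, the dictionaries, and the exact sort key below are illustrative assumptions, not Spark's real code). The sketch orders usable workers so that workers already running one of this app's executors come last, matching the PR's intent that the new executor lands on workC rather than workB:

    ```python
    def pick_workers(free_cores, running_execs, num_executors, cores_per_executor=1):
        """Model of spread-out executor placement (illustrative, not Spark's code).

        free_cores:    worker name -> cores currently free
        running_execs: worker name -> executors this app already runs there
        """
        # Only workers with enough free cores can host another executor.
        usable = [w for w, c in free_cores.items() if c >= cores_per_executor]
        assigned = []
        for _ in range(num_executors):
            # Assumed ordering: fewest executors of this app first,
            # then most free cores -- one way to read "reorders by another key".
            usable.sort(key=lambda w: (running_execs.get(w, 0), -free_cores[w]))
            # Place one executor on the best remaining worker, if any.
            placed = False
            for w in usable:
                if free_cores[w] >= cores_per_executor:
                    free_cores[w] -= cores_per_executor
                    running_execs[w] = running_execs.get(w, 0) + 1
                    assigned.append(w)
                    placed = True
                    break
            if not placed:
                break
        return assigned

    # Scenario from the PR description: workB has 3 cores free but already
    # hosts an executor; workC has 2 cores free and none.
    free = {"workA": 0, "workB": 3, "workC": 2}
    execs = {"workA": 1, "workB": 1, "workC": 0}
    print(pick_workers(free, execs, 1))  # the new executor goes to workC
    ```

    Sorting purely by free cores would have picked workB here, which is the misplacement the PR describes.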

You can merge this pull request into a Git repository by running:

    $ git pull startExecutorsOnWorkers

Alternatively you can review and apply these changes as the patch at:

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19832
commit 937277844765604a8698d9f214c0006ecb7e54f8
Author: liuxian <>
Date:   2017-11-28T07:01:17Z



