spark-issues mailing list archives

From "Lijie Xu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-12554) Standalone app scheduler will hang when app.coreToAssign < minCoresPerExecutor
Date Tue, 29 Dec 2015 11:43:49 GMT

     [ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lijie Xu updated SPARK-12554:
-----------------------------
    Description: 
In scheduleExecutorsOnWorker() in Master.scala, *val keepScheduling = coresToAssign >= minCoresPerExecutor* should be changed to *val keepScheduling = coresToAssign > 0*.

Suppose an app requests 10 cores (i.e., spark.cores.max = 10) and app.coresPerExecutor is 4 (i.e., spark.executor.cores = 4).

After two executors (4 cores each) have been allocated to this app, *coresToAssign = 2* while *minCoresPerExecutor = coresPerExecutor = 4*, so *keepScheduling = false* and no further executor is allocated. If *spark.scheduler.minRegisteredResourcesRatio* is set to a large value (e.g., > 0.8 in this case, since only 8 of the 10 requested cores can ever register), the app will hang and never finish.

In particular, if a small app's coresPerExecutor is larger than its requested cores (e.g., spark.cores.max = 10, spark.executor.cores = 16), *val keepScheduling = coresToAssign >= minCoresPerExecutor* is always false, so no executor is allocated at all and the app hangs and never finishes.
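The stalling condition can be sketched as follows. This is a simplified stand-in for the real scheduleExecutorsOnWorker() loop (worker capacity and memory checks are omitted; only the core accounting relevant to this issue is kept), so the names here mirror the description rather than the actual Master.scala internals:

```scala
// Simplified sketch of the core-accounting loop in scheduleExecutorsOnWorker().
// Not the real Master.scala code: worker free-core and memory constraints are
// left out to isolate the keepScheduling condition discussed above.
object SchedulingSketch {
  // Returns how many cores are actually assigned before the loop stops.
  def assignCores(requestedCores: Int, coresPerExecutor: Int): Int = {
    val minCoresPerExecutor = coresPerExecutor
    var coresToAssign = requestedCores
    // Current condition: stop as soon as fewer than minCoresPerExecutor
    // cores remain, even though the app still has unassigned cores.
    while (coresToAssign >= minCoresPerExecutor) {
      coresToAssign -= minCoresPerExecutor // one more executor launched
    }
    requestedCores - coresToAssign
  }

  def main(args: Array[String]): Unit = {
    // spark.cores.max = 10, spark.executor.cores = 4:
    // only 8 of 10 cores are ever assigned, so the registered-resources
    // ratio tops out at 0.8 and a larger minRegisteredResourcesRatio hangs.
    println(assignCores(10, 4))  // 8
    // spark.cores.max = 10, spark.executor.cores = 16:
    // keepScheduling is false from the start, so nothing is assigned.
    println(assignCores(10, 16)) // 0
  }
}
```

Running the sketch shows both hang scenarios from the description: 8 of 10 cores assigned in the first case, and 0 in the second.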



> Standalone app scheduler will hang when app.coreToAssign < minCoresPerExecutor
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-12554
>                 URL: https://issues.apache.org/jira/browse/SPARK-12554
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy, Scheduler
>    Affects Versions: 1.5.2
>            Reporter: Lijie Xu
>            Priority: Critical
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

