spark-issues mailing list archives

From "nirav patel (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-19090) Dynamic Resource Allocation not respecting spark.executor.cores
Date Tue, 10 Jan 2017 05:39:58 GMT

    [ https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813962#comment-15813962 ]

nirav patel edited comment on SPARK-19090 at 1/10/17 5:39 AM:
--------------------------------------------------------------

Oh right, I have them set exclusively; I corrected my previous comment. I verified that dynamic
allocation was enabled by checking the following in the driver logs:

[spark-dynamic-executor-allocation] org.apache.spark.ExecutorAllocationManager: Requesting
4 new executors because tasks are backlogged (new desired total will be 6)

If it were not enabled, it would have actually created 6 executors with 5 cores each.
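
For context, here's a rough sketch of how the allocation manager derives that target from
spark.executor.cores (a paraphrase of ExecutorAllocationManager's sizing arithmetic in
Spark 1.x/2.x; the conf and backlog values below are illustrative stand-ins, not my actual numbers):

      import org.apache.spark.SparkConf

      // Paraphrase of Spark's ExecutorAllocationManager sizing logic (1.x/2.x).
      val conf = new SparkConf().set("spark.executor.cores", "5") // illustrative value
      val tasksPerExecutor =
        conf.getInt("spark.executor.cores", 1) / conf.getInt("spark.task.cpus", 1)
      val numPendingTasks = 30 // hypothetical backlog, for illustration only
      // Target = ceil(backlog / task slots per executor): with 5 cores per executor
      // this asks for ceil(30 / 5) = 6; if the cores setting were silently ignored
      // (default 1), it would ask for 30 executors of 1 core each.
      val maxExecutorsNeeded = (numPendingTasks + tasksPerExecutor - 1) / tasksPerExecutor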

Here's the snippet of code I have:

      if (sparkConfig.dynamicAllocation) {
        sparkConf.set("spark.dynamicAllocation.enabled", "true")
        sparkConf.set("spark.dynamicAllocation.executorIdleTimeout", "600s")
        sparkConf.set("spark.dynamicAllocation.initialExecutors", sparkConfig.executorInstances)
        sparkConf.set("spark.dynamicAllocation.minExecutors",
          String.valueOf(Integer.valueOf(sparkConfig.executorInstances) - 3))
        sparkConf.set("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", "300s")
        sparkConf.set("spark.dynamicAllocation.schedulerBacklogTimeout", "120")
      } else {
        sparkConf.set("spark.executor.instances", sparkConfig.executorInstances)
      }

      sparkConf.set("spark.executor.cores", sparkConfig.executorCores)


> Dynamic Resource Allocation not respecting spark.executor.cores
> ---------------------------------------------------------------
>
>                 Key: SPARK-19090
>                 URL: https://issues.apache.org/jira/browse/SPARK-19090
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2, 1.6.1, 2.0.1
>            Reporter: nirav patel
>
> When enabling dynamic scheduling with YARN, I see that all executors use only 1 core
> even if I set "spark.executor.cores" to 6. If dynamic scheduling is disabled, each
> executor gets 6 cores, i.e. it respects "spark.executor.cores". I have tested this
> against Spark 1.5. I think the behavior will be the same with 2.x as well.


