spark-issues mailing list archives

From "nirav patel (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-19090) Dynamic Resource Allocation not respecting spark.executor.cores
Date Tue, 10 Jan 2017 17:26:58 GMT

    [ https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15815580#comment-15815580 ]

nirav patel commented on SPARK-19090:
-------------------------------------

I tried passing Spark parameters directly via Oozie using <spark-opts>--conf spark.executor.cores=5</spark-opts>,
which passes them to org.apache.spark.deploy.SparkSubmit; SparkSubmit then calls my
application class. In this scenario I can see dynamic executor scheduling picking up the
value and using it to request 5 vcores per executor. So I think that's my workaround.
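
For reference, a rough sketch of what that Oozie spark action looks like; only the
<spark-opts> element is from my actual workflow, and the action name, schema version,
class, and jar below are placeholders:

    <action name="spark-job">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <master>yarn-cluster</master>
            <name>MySparkJob</name>
            <class>com.example.MyApp</class>
            <jar>${nameNode}/apps/my-app.jar</jar>
            <spark-opts>--conf spark.executor.cores=5</spark-opts>
        </spark>
        <ok to="end"/>
        <error to="fail"/>
    </action>
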
The real issue seems to be that the Spark dynamic scheduling module ignores the
spark.executor.cores parameter when it is set through SparkConf by the user application
class. It recognizes all the other parameters! Since I am not passing any other parameter
directly to spark-submit, they are all set via my application code, as shown in my code
snippet. It is only spark.executor.cores set at the application level that gets ignored,
which is weird. If I have read the documentation correctly, one can always override a
spark-submit command-line parameter via the application-level SparkConf object. That
definitely works when dynamic scheduling is turned off; it only fails when it is on.
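
Since the original snippet isn't reproduced in this comment, here is a minimal sketch of
the kind of application-level configuration I mean (the app name and object name are made
up; the point is that spark.executor.cores is set on the SparkConf rather than on the
spark-submit command line):

    import org.apache.spark.{SparkConf, SparkContext}

    object MyApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("dynamic-allocation-cores-test")   // made-up app name
          .set("spark.dynamicAllocation.enabled", "true")
          .set("spark.shuffle.service.enabled", "true")
          .set("spark.executor.cores", "6")               // the value that gets ignored when dynamic allocation is on
        val sc = new SparkContext(conf)
        // ... job logic ...
        sc.stop()
      }
    }

With dynamic scheduling off, executors come up with 6 cores as expected; with it on, each
executor requests only 1 vcore from YARN.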

> Dynamic Resource Allocation not respecting spark.executor.cores
> ---------------------------------------------------------------
>
>                 Key: SPARK-19090
>                 URL: https://issues.apache.org/jira/browse/SPARK-19090
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2, 1.6.1, 2.0.1
>            Reporter: nirav patel
>
> When enabling dynamic scheduling with YARN, I see that all executors use only 1 core
> even if I set "spark.executor.cores" to 6. If dynamic scheduling is disabled, each
> executor has 6 cores, i.e. it respects "spark.executor.cores". I have tested this
> against Spark 1.5; I think the behavior will be the same with 2.x as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

