spark-dev mailing list archives

From mkhaitman <>
Subject Spark.Executor.Cores question
Date Fri, 23 Oct 2015 19:05:55 GMT
Regarding the 'spark.executor.cores' config option in a Standalone spark
environment, I'm curious about whether there's a way to enforce the
following logic:

- Max cores per executor = 4
- Max executors per application per worker = 1

To force better balance across all workers, I want to ensure that a single
Spark job can only ever use a fixed upper limit on the number of cores for
each executor it holds, while avoiding the situation where it spawns 3
executors on one worker but only 1 or 2 on the others. Some Spark jobs use
much more memory during aggregation tasks (joins / groupBys), and that
memory pressure is heavily influenced by the number of cores per executor
for that job.
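As far as I can tell there is no explicit per-worker executor cap in
standalone mode today, but the constraint above can be roughly approximated
by combining `spark.executor.cores` with `spark.cores.max` under the
master's default spread-out scheduling. A sketch in spark-defaults.conf
(the 4-worker cluster size is an assumption for illustration):

```
# Sketch only, assuming a 4-worker cluster where each worker has
# at least 4 free cores when the application is submitted.

# Cap each executor at 4 cores...
spark.executor.cores    4

# ...and cap the application's total at 4 cores x 4 workers, so that
# spread-out scheduling hands each worker at most one 4-core executor.
spark.cores.max         16

# Master-side default; distributes an app's cores across workers
# rather than packing them onto as few workers as possible.
spark.deploy.spreadOut  true
```

This only approximates the constraint: if some worker happens to have fewer
than 4 free cores, the master can grant the remaining cores elsewhere,
producing exactly the imbalance I described.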

If this kind of setup/configuration doesn't already exist for Spark, and
others see the benefit of it, where in the codebase would be the best place
to insert this logic?

