spark-issues mailing list archives

From "Mridul Muralidharan (JIRA)" <>
Subject [jira] [Commented] (SPARK-1706) Allow multiple executors per worker in Standalone mode
Date Sun, 04 May 2014 00:38:14 GMT


Mridul Muralidharan commented on SPARK-1706:

Oh my, this was supposed to be a logical addition once the YARN changes were done.
The YARN changes were very heavily modelled on standalone mode (hence the name yarn-standalone!),
and it was supposed to be a two-way street: changes made for YARN support (multi-tenancy,
etc.) were supposed to have been added back to standalone mode once YARN support stabilized.
I did not realize I never got around to it - my apologies!

> Allow multiple executors per worker in Standalone mode
> ------------------------------------------------------
>                 Key: SPARK-1706
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy
>            Reporter: Patrick Wendell
>             Fix For: 1.1.0
> Right now, if people want to launch multiple executors on each machine, they need to start
> multiple standalone workers. This is not too difficult, but it means you have extra JVMs
> sitting around.
> We should just allow users to set the number of cores they want per executor in standalone
> mode and then allow packing multiple executors onto each node. This would make standalone mode
> more consistent with YARN in the way you request resources.
> It's not too big of a change as far as I can see. You'd need to:
> 1. Introduce a configuration for how many cores you want per executor.
> 2. Change the scheduling logic in Master.scala to take this into account.
> 3. Change CoarseGrainedSchedulerBackend to not assume a 1<->1 correspondence between
> hosts and executors.
> And maybe modify a few other places.
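The packing step described in point 2 can be sketched as follows. This is a minimal, hypothetical illustration of how a cores-per-executor setting could drive the Master's allocation loop; the names (`WorkerInfo`, `coresPerExecutor`, `scheduleExecutors`) are assumptions for the sketch and are not Spark's actual internals.

```scala
// Illustrative sketch only: pack multiple fixed-size executors onto each
// worker, instead of assuming one executor per worker.
case class WorkerInfo(id: String, totalCores: Int)

// Returns (workerId, coresAssigned) pairs, one entry per launched executor.
def scheduleExecutors(workers: Seq[WorkerInfo],
                      coresRequested: Int,
                      coresPerExecutor: Int): Seq[(String, Int)] = {
  var remaining = coresRequested
  val assignments = Seq.newBuilder[(String, Int)]
  for (w <- workers if remaining > 0) {
    var free = w.totalCores
    // Launch as many full-size executors as this worker's free cores allow.
    while (free >= coresPerExecutor && remaining >= coresPerExecutor) {
      assignments += ((w.id, coresPerExecutor))
      free -= coresPerExecutor
      remaining -= coresPerExecutor
    }
  }
  assignments.result()
}

val workers = Seq(WorkerInfo("w1", 8), WorkerInfo("w2", 4))
val placed = scheduleExecutors(workers, coresRequested = 10, coresPerExecutor = 2)
// Four 2-core executors land on w1 (8 cores) and one on w2 (2 cores).
```

With the one-executor-per-worker assumption removed, the scheduler backend (point 3) would then have to key executors by an executor ID rather than by host.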

This message was sent by Atlassian JIRA
