spark-issues mailing list archives

From "Luca Bruno (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-14977) Fine grained mode in Mesos is not fair
Date Thu, 28 Apr 2016 09:24:13 GMT

     [ https://issues.apache.org/jira/browse/SPARK-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Luca Bruno updated SPARK-14977:
-------------------------------
    Description: 
I've set up a Mesos cluster and I'm running Spark in fine-grained mode.
Spark defaults to 2 executor cores and 2 GB of RAM.
The Mesos cluster has 8 cores and 8 GB of RAM in total.
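For concreteness, a minimal sketch of this kind of setup (the master URL and app name are
placeholders; spark.mesos.coarse and spark.executor.memory are real Spark properties, and
spark.mesos.coarse=false selects fine-grained mode):

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch of the setup described above; the master URL and
    // app name are placeholders, not values from this report.
    val conf = new SparkConf()
      .setMaster("mesos://zk://master.example.com:2181/mesos")
      .setAppName("fine-grained-test")
      .set("spark.mesos.coarse", "false")   // explicitly select fine-grained mode
      .set("spark.executor.memory", "2g")   // the 2 GB default noted above
    val sc = new SparkContext(conf)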

When I submit two Spark jobs simultaneously, Spark always accepts the full resource offers,
so the two frameworks end up using 4 GB of RAM each instead of 2 GB.

If I submit a third Spark job, it never gets resource offers from Mesos, at least with
the default HierarchicalDRF allocator module.
Mesos keeps offering the freed 4 GB of RAM to the earlier Spark jobs, and Spark keeps
accepting the full offer for every new task.
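For reference, DRF ranks frameworks by their dominant share: the maximum, over resource
types, of allocated/total. A toy calculation with the numbers above (plain Scala, not Mesos
source; the per-job CPU figures are illustrative assumptions):

    // Toy dominant-share arithmetic for the scenario above.
    case class Usage(cpus: Double, memGb: Double)
    val total = Usage(cpus = 8, memGb = 8)

    def dominantShare(u: Usage): Double =
      math.max(u.cpus / total.cpus, u.memGb / total.memGb)

    val jobA = Usage(cpus = 2, memGb = 4)  // earlier job holding 4 GB
    val jobB = Usage(cpus = 2, memGb = 4)  // earlier job holding 4 GB
    val jobC = Usage(cpus = 0, memGb = 0)  // newly submitted job

    // DRF offers freed resources to the framework with the lowest
    // dominant share first, so one would expect job C (share 0.0) to be
    // first in line whenever tasks finish; the report above observes the
    // opposite, with offers going back to the earlier jobs.
    Seq("A" -> jobA, "B" -> jobB, "C" -> jobC).foreach { case (name, u) =>
      println(f"job $name dominant share = ${dominantShare(u)}%.2f")
    }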

Hence new Spark jobs have no chance of getting a share.

Is this something to be solved with a custom Mesos allocator? Should Spark be fairer
instead? Or could Spark provide a configuration option to always accept offers with the
minimum required resources?
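One existing, real setting that may partially bound this behavior (an assumption that it
helps here, not the per-offer minimum-acceptance option suggested above):
spark.cores.max caps the total cores a single Spark framework will hold, e.g.:

    import org.apache.spark.SparkConf

    // spark.cores.max is a real Spark property; capping it bounds how
    // much of the cluster one framework can take, though it does not
    // change how much of each individual offer Spark accepts.
    val cappedConf = new SparkConf()
      .set("spark.mesos.coarse", "false")
      .set("spark.cores.max", "2")  // hold at most 2 cores cluster-wide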


> Fine grained mode in Mesos is not fair
> --------------------------------------
>
>                 Key: SPARK-14977
>                 URL: https://issues.apache.org/jira/browse/SPARK-14977
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 2.1.0
>         Environment: Spark commit db75ccb, Debian jessie, Mesos fine grained
>            Reporter: Luca Bruno
>
> I've set up a Mesos cluster and I'm running Spark in fine-grained mode.
> Spark defaults to 2 executor cores and 2 GB of RAM.
> The Mesos cluster has 8 cores and 8 GB of RAM in total.
> When I submit two Spark jobs simultaneously, Spark always accepts the full resource
> offers, so the two frameworks end up using 4 GB of RAM each instead of 2 GB.
> If I submit a third Spark job, it never gets resource offers from Mesos, at least with
> the default HierarchicalDRF allocator module.
> Mesos keeps offering the freed 4 GB of RAM to the earlier Spark jobs, and Spark keeps
> accepting the full offer for every new task.
> Hence new Spark jobs have no chance of getting a share.
> Is this something to be solved with a custom Mesos allocator? Should Spark be fairer
> instead? Or could Spark provide a configuration option to always accept offers with the
> minimum required resources?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

