spark-reviews mailing list archives

From pwendell <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-3535][Mesos] Add 15% task memory overhe...
Date Tue, 16 Sep 2014 22:07:46 GMT
Github user pwendell commented on the pull request:

    https://github.com/apache/spark/pull/2401#issuecomment-55821489
  
    Hey, will this have compatibility issues for existing deployments? I know many clusters
    where they just have Spark request the entire amount of memory on the node. With this
    change, if a user upgrades, their jobs could just starve. What if instead we "scale down"
    the size of the executor based on what the user requests? I.e., if they request 20 GB
    executors, we reserve a few GB of that for this overhead. @andrewor14, how does this work
    in YARN? It might be good to have similar semantics to what they have there.
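
    For concreteness, here is a rough sketch of the two accounting schemes being
    contrasted. The 15% figure comes from the PR title and the 20 GB request from
    the example above; the names and arithmetic are illustrative only, not Spark's
    actual Mesos or YARN code:

        // Illustrative sketch only -- not Spark's implementation.
        val requestedMb = 20 * 1024   // executor memory the user asked for
        val overheadFraction = 0.15   // the PR's proposed task memory overhead

        // As proposed: ask Mesos for the overhead ON TOP of the request.
        // A job already sized to consume a whole node would over-subscribe
        // it after an upgrade.
        val mesosRequestMb = (requestedMb * (1 + overheadFraction)).toInt

        // Suggested alternative ("scale down"): reserve the overhead WITHIN
        // the request, so the total asked of Mesos never grows on upgrade.
        val executorHeapMb = (requestedMb * (1 - overheadFraction)).toInt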


