spark-reviews mailing list archives

From mgummelt <...@git.apache.org>
Subject [GitHub] spark issue #11157: [SPARK-11714][Mesos] Make Spark on Mesos honor port rest...
Date Fri, 05 Aug 2016 00:02:58 GMT
Github user mgummelt commented on the issue:

    https://github.com/apache/spark/pull/11157
  
    > How about the rest of the ports?

    Unused resources in an offer are implicitly declined.

    > We will have only one executor per job right?

    If they specify a port, then yes: one executor per node per job.

    > Right now I get conflicts without the isolator in my local tests.

    What conflicts?

    > So why consume all port resources?

    See above. Unused resources in an offer are implicitly declined, so if we launch a task using a single port, we're implicitly declining all other ports.
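    To make that concrete, here is a minimal sketch (not this PR's implementation) of what consuming a single port out of an offer looks like with the Mesos protobuf bindings; any ports the task does not list this way are declined along with the rest of the offer:

    ```scala
    // Sketch only: build a "ports" RANGES resource covering exactly one port.
    // Every other port in the offer is left unused, so Mesos implicitly
    // declines it together with the remainder of the offer.
    import org.apache.mesos.Protos._

    def singlePortResource(port: Long): Resource =
      Resource.newBuilder()
        .setName("ports")
        .setType(Value.Type.RANGES)
        .setRanges(Value.Ranges.newBuilder()
          .addRange(Value.Range.newBuilder().setBegin(port).setEnd(port)))
        .build()

    // Attaching only singlePortResource(chosenPort) to the TaskInfo means that
    // port is the only one the allocator sees as consumed; the remaining
    // ranges go back to the pool.
    ```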
    
    > Moreover if we have the isolator enabled I think we should allow task creation independently of a port offer

    That's a good point. I spoke with our networking team, and unfortunately there's no way for us to discover whether IP-per-container is enabled, so we'd have to expose a spark.mesos.ip-per-container setting or something like it to prevent using port offers. I think this is an orthogonal problem, though, since we don't even support CNI yet. Let's solve it in a separate PR.
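    For illustration only, a rough sketch of what that gate could look like; to be clear, `spark.mesos.ip-per-container` is just the placeholder name floated above, not an existing Spark config, and `portResourcesFor` / `buildPortResources` are made-up names standing in for whatever carves ports out of the offer:

    ```scala
    // Hypothetical sketch: the config key is the placeholder discussed above,
    // not a real Spark setting.
    import org.apache.spark.SparkConf
    import org.apache.mesos.Protos.{Offer, Resource}

    def portResourcesFor(
        conf: SparkConf,
        offer: Offer,
        buildPortResources: Offer => Seq[Resource]): Seq[Resource] = {
      if (conf.getBoolean("spark.mesos.ip-per-container", false)) {
        // The isolator gives each container its own IP, so don't tie task
        // creation to port offers at all.
        Seq.empty
      } else {
        buildPortResources(offer)
      }
    }
    ```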
    



