spark-reviews mailing list archives

From ash211 <>
Subject [GitHub] spark pull request: Add mesos specific configurations into doc
Date Tue, 18 Nov 2014 23:40:19 GMT
Github user ash211 commented on a diff in the pull request:
    --- Diff: docs/ ---
    @@ -183,6 +183,47 @@ node. Please refer to [Hadoop on Mesos](
     In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through
    +# Configuration
    +See the [configuration page](configuration.html) for information on Spark configurations. The following configs are specific to Spark on Mesos.
    +#### Spark Properties
    +<table class="table">
    +<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
    +<tr>
    +  <td><code>spark.mesos.coarse</code></td>
    +  <td>false</td>
    +  <td>
    +    Set the run mode for Spark on Mesos. For more information about the run mode, refer to the Mesos Run Mode section above.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.mesos.extra.cores</code></td>
    +  <td>0</td>
    +  <td>
    +    Set the extra number of CPUs to request per task.
    --- End diff ---
    Is this setting for both coarse-grained and fine-grained modes?
    Also, can you provide a formula that produces the total number of cores requested? From what you have now, I'm thinking something like:
    `totalCoresPerExecutor = numTasks + extraCores`
    This would be similar to the formula for memoryOverhead below.
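For concreteness, the hypothesized formula can be sketched as follows. This is only an illustration of the reviewer's guess, not Spark's actual allocation logic; the function name and arguments are made up for this sketch, with `numTasks` and `extraCores` taken from the formula above:

```python
def total_cores_per_executor(num_tasks: int, extra_cores: int) -> int:
    # Hypothesized formula from the comment above: one core per
    # concurrently running task, plus the extra cores configured
    # via spark.mesos.extra.cores.
    return num_tasks + extra_cores

# e.g. 4 concurrent tasks with spark.mesos.extra.cores=2
print(total_cores_per_executor(4, 2))  # 6
```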

If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at or file a JIRA ticket
with INFRA.

To unsubscribe, e-mail:
For additional commands, e-mail:
