spark-commits mailing list archives

From r...@apache.org
Subject spark git commit: Add mesos specific configurations into doc
Date Thu, 18 Dec 2014 20:16:22 GMT
Repository: spark
Updated Branches:
  refs/heads/branch-1.2 f305e7db2 -> 19efa5bf9


Add mesos specific configurations into doc

Author: Timothy Chen <tnachen@gmail.com>

Closes #3349 from tnachen/mesos_doc and squashes the following commits:

737ef49 [Timothy Chen] Add TOC
5ca546a [Timothy Chen] Update description around cores requested.
26283a5 [Timothy Chen] Add mesos specific configurations into doc

(cherry picked from commit d9956f86ad7a937c5f2cfe39eacdcbdad9356c30)
Signed-off-by: Reynold Xin <rxin@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/19efa5bf
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/19efa5bf
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/19efa5bf

Branch: refs/heads/branch-1.2
Commit: 19efa5bf9fe7c15e68f55c277d75f2767af4a697
Parents: f305e7d
Author: Timothy Chen <tnachen@gmail.com>
Authored: Thu Dec 18 12:15:53 2014 -0800
Committer: Reynold Xin <rxin@databricks.com>
Committed: Thu Dec 18 12:16:17 2014 -0800

----------------------------------------------------------------------
 docs/running-on-mesos.md | 45 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/19efa5bf/docs/running-on-mesos.md
----------------------------------------------------------------------
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 1073abb..7835849 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -2,6 +2,8 @@
 layout: global
 title: Running Spark on Mesos
 ---
+* This will become a table of contents (this text will be scraped).
+{:toc}
 
 Spark can run on hardware clusters managed by [Apache Mesos](http://mesos.apache.org/).
 
@@ -183,6 +185,49 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
 In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.
 
 
+# Configuration
+
+See the [configuration page](configuration.html) for information on Spark configurations. The following configs are specific to Spark on Mesos.
+
+#### Spark Properties
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td><code>spark.mesos.coarse</code></td>
+  <td>false</td>
+  <td>
+    Set the run mode for Spark on Mesos. For more information about the run modes, refer to the Mesos Run Modes section above.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.extra.cores</code></td>
+  <td>0</td>
+  <td>
+    Set the extra number of CPUs to request per task. This setting is only used in Mesos coarse-grained mode.
+    The total number of cores requested per task is the number of cores in the offer plus the configured extra cores.
+    Note that the total number of cores the executor requests will never exceed the spark.cores.max setting.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.executor.home</code></td>
+  <td>SPARK_HOME</td>
+  <td>
+    The location where the Mesos executor looks for the Spark binaries to execute, defaulting to the SPARK_HOME setting.
+    This property is only used when no spark.executor.uri is provided, and it assumes Spark is installed at the specified location on each slave.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.executor.memoryOverhead</code></td>
+  <td>384</td>
+  <td>
+    The amount of memory that the Mesos executor requests for the task, to account for the overhead of running the executor itself.
+    The final total amount of memory allocated is the maximum of the executor memory plus memoryOverhead and the overhead fraction (1.07) times the executor memory.
+  </td>
+</tr>
+</table>
+
 # Troubleshooting and Debugging
 
 A few places to look during debugging:

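The coarse-grained core accounting described for spark.mesos.extra.cores can be sketched as follows. This is an illustration of the documented behavior, not Spark's actual scheduler code; the function name and parameters are hypothetical.

```python
# Hypothetical sketch: in coarse-grained mode the executor asks for the
# offered cores plus the configured extras, but never more than the
# headroom remaining under spark.cores.max.
def cores_to_request(offer_cores, extra_cores=0, cores_max=8, cores_granted=0):
    remaining = cores_max - cores_granted  # headroom under spark.cores.max
    return min(offer_cores + extra_cores, remaining)
```

For example, given a 4-core offer with one extra core configured and nothing granted yet against a cap of 8, the request is 5 cores; once 4 cores are already granted, the same offer yields a request of only 4.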

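The memory sizing rule described for spark.mesos.executor.memoryOverhead can likewise be sketched in a few lines. Again an illustration of the documented rule, not Spark's API; the function name is made up for this example.

```python
# Sketch of the described sizing: the allocation is the larger of
# (executor memory + memoryOverhead) and (1.07 * executor memory),
# i.e. the overhead is at least 384 MB and at least 7% of the heap.
def total_executor_memory_mb(executor_memory_mb, memory_overhead_mb=384):
    return max(executor_memory_mb + memory_overhead_mb,
               int(1.07 * executor_memory_mb))
```

With the default overhead, a 1024 MB executor gets 1408 MB in total (the 384 MB floor dominates), while a 10240 MB executor gets 10956 MB (the 7% fraction dominates).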
