spark-commits mailing list archives

From joshro...@apache.org
Subject spark git commit: Add comment about defaultMinPartitions
Date Sun, 25 Jan 2015 19:29:41 GMT
Repository: spark
Updated Branches:
  refs/heads/master d22ca1e92 -> 412a58e11


Add comment about defaultMinPartitions

Added a comment about using math.min for choosing the default partition count.

Author: Idan Zalzberg <idanzalz@gmail.com>

Closes #4102 from idanz/patch-2 and squashes the following commits:

50e9d58 [Idan Zalzberg] Update SparkContext.scala


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/412a58e1
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/412a58e1
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/412a58e1

Branch: refs/heads/master
Commit: 412a58e118ef083ea1d1d6daccd9c531852baf53
Parents: d22ca1e
Author: Idan Zalzberg <idanzalz@gmail.com>
Authored: Sun Jan 25 11:28:05 2015 -0800
Committer: Josh Rosen <joshrosen@databricks.com>
Committed: Sun Jan 25 11:28:05 2015 -0800

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/SparkContext.scala | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/412a58e1/core/src/main/scala/org/apache/spark/SparkContext.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index 8175d17..4c4ee04 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -1514,7 +1514,11 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
   @deprecated("use defaultMinPartitions", "1.0.0")
   def defaultMinSplits: Int = math.min(defaultParallelism, 2)
 
-  /** Default min number of partitions for Hadoop RDDs when not given by user */
+  /**
+   * Default min number of partitions for Hadoop RDDs when not given by user.
+   * Note that we use math.min, so "defaultMinPartitions" cannot be higher than 2.
+   * The reasons for this are discussed in https://github.com/mesos/spark/pull/718.
+   */
   def defaultMinPartitions: Int = math.min(defaultParallelism, 2)
 
   private val nextShuffleId = new AtomicInteger(0)
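
A minimal sketch (not part of the commit) illustrating the math.min cap the
comment describes. The local master setting, the object name, and the input
path below are illustrative assumptions, not anything from this patch.

----------------------------------------------------------------------
import org.apache.spark.{SparkConf, SparkContext}

object DefaultMinPartitionsDemo {
  def main(args: Array[String]): Unit = {
    // With local[8], defaultParallelism is 8 (one per local core).
    val conf = new SparkConf().setAppName("demo").setMaster("local[8]")
    val sc = new SparkContext(conf)

    println(sc.defaultParallelism)   // 8
    // Capped by the math.min discussed above: math.min(8, 2) == 2.
    println(sc.defaultMinPartitions) // 2

    // Callers who want more input splits pass minPartitions explicitly;
    // note it is a minimum, so Hadoop may still create more partitions.
    val rdd = sc.textFile("hdfs:///some/path", minPartitions = 16)
    println(rdd.partitions.length)

    sc.stop()
  }
}
----------------------------------------------------------------------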



