spark-commits mailing list archives

From a...@apache.org
Subject spark git commit: [SPARK-5073] spark.storage.memoryMapThreshold has two default values
Date Sun, 11 Jan 2015 21:50:58 GMT
Repository: spark
Updated Branches:
  refs/heads/master 331326090 -> 1656aae2b


[SPARK-5073] spark.storage.memoryMapThreshold has two default values

Because the typical OS page size is about 4 KB, the default value of spark.storage.memoryMapThreshold
in DiskStore had been set to 2 * 4096 bytes; this change unifies the default to 2 MB so that the
property no longer carries two different default values.
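
For illustration, an application can override this default before creating its context. A minimal
sketch in Scala; the application name and the 8 MB value are illustrative, not part of this change:

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative only: raise the memory-map threshold from the new
    // 2 MB default to 8 MB for this application.
    val conf = new SparkConf()
      .setAppName("MemoryMapThresholdExample")
      .set("spark.storage.memoryMapThreshold", (8 * 1024 * 1024).toString)
    val sc = new SparkContext(conf)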

Author: lewuathe <lewuathe@me.com>

Closes #3900 from Lewuathe/integrate-memoryMapThreshold and squashes the following commits:

e417acd [lewuathe] [SPARK-5073] Update docs/configuration
834aba4 [lewuathe] [SPARK-5073] Fix style
adcea33 [lewuathe] [SPARK-5073] Integrate memory map threshold to 2MB
fcce2e5 [lewuathe] [SPARK-5073] spark.storage.memoryMapThreshold have two default value


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1656aae2
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1656aae2
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1656aae2

Branch: refs/heads/master
Commit: 1656aae2b4e8b026f8cfe782519f72d32ed2b291
Parents: 3313260
Author: lewuathe <lewuathe@me.com>
Authored: Sun Jan 11 13:50:42 2015 -0800
Committer: Aaron Davidson <aaron@databricks.com>
Committed: Sun Jan 11 13:50:42 2015 -0800

----------------------------------------------------------------------
 core/src/main/scala/org/apache/spark/storage/DiskStore.scala | 3 ++-
 docs/configuration.md                                        | 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/1656aae2/core/src/main/scala/org/apache/spark/storage/DiskStore.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/storage/DiskStore.scala b/core/src/main/scala/org/apache/spark/storage/DiskStore.scala
index 8dadf67..61ef5ff 100644
--- a/core/src/main/scala/org/apache/spark/storage/DiskStore.scala
+++ b/core/src/main/scala/org/apache/spark/storage/DiskStore.scala
@@ -31,7 +31,8 @@ import org.apache.spark.util.Utils
 private[spark] class DiskStore(blockManager: BlockManager, diskManager: DiskBlockManager)
   extends BlockStore(blockManager) with Logging {
 
-  val minMemoryMapBytes = blockManager.conf.getLong("spark.storage.memoryMapThreshold", 2 * 4096L)
+  val minMemoryMapBytes = blockManager.conf.getLong(
+    "spark.storage.memoryMapThreshold", 2 * 1024L * 1024L)
 
   override def getSize(blockId: BlockId): Long = {
     diskManager.getFile(blockId.name).length
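
For context, minMemoryMapBytes controls whether DiskStore memory-maps a block file or copies it
into a heap buffer when serving reads. The following is a simplified, self-contained sketch of
that decision under the new default, not the actual DiskStore code; readBlock is a made-up name
for illustration:

    import java.io.{File, IOException, RandomAccessFile}
    import java.nio.ByteBuffer
    import java.nio.channels.FileChannel

    // Sketch: read a whole file, memory-mapping it only when its size
    // reaches the threshold (2 MB by default after this change).
    def readBlock(file: File, minMemoryMapBytes: Long = 2L * 1024L * 1024L): ByteBuffer = {
      val channel = new RandomAccessFile(file, "r").getChannel
      try {
        val length = channel.size()
        if (length < minMemoryMapBytes) {
          // Small block: a plain heap buffer avoids the per-mapping
          // overhead that makes mmap wasteful below a few OS pages.
          val buf = ByteBuffer.allocate(length.toInt)
          while (buf.remaining() > 0) {
            if (channel.read(buf) == -1) {
              throw new IOException(s"Unexpected EOF while reading $file")
            }
          }
          buf.flip()
          buf
        } else {
          // Large block: let the OS page the file in lazily via mmap.
          channel.map(FileChannel.MapMode.READ_ONLY, 0, length)
        }
      } finally {
        channel.close()
      }
    }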

http://git-wip-us.apache.org/repos/asf/spark/blob/1656aae2/docs/configuration.md
----------------------------------------------------------------------
diff --git a/docs/configuration.md b/docs/configuration.md
index 2add485..f292bfb 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -678,7 +678,7 @@ Apart from these, the following properties are also available, and may be useful
 </tr>
 <tr>
   <td><code>spark.storage.memoryMapThreshold</code></td>
-  <td>8192</td>
+  <td>2097152</td>
   <td>
     Size of a block, in bytes, above which Spark memory maps when reading a block from disk.
     This prevents Spark from memory mapping very small blocks. In general, memory
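
As a usage note on the documented default: the value is read with getLong, so it is a plain byte
count, and leaving the property unset yields the fallback shown in the code change. A small
self-contained check, with names chosen only for illustration:

    import org.apache.spark.SparkConf

    // With the property unset, getLong returns the 2 MB fallback,
    // matching the documented default of 2097152 bytes.
    val conf = new SparkConf()
    val threshold = conf.getLong("spark.storage.memoryMapThreshold", 2L * 1024L * 1024L)
    println(s"Effective memoryMapThreshold: $threshold bytes")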


