spark-reviews mailing list archives

From pwendell <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-1777] Prevent OOMs from single partitio...
Date Tue, 22 Jul 2014 08:00:04 GMT
Github user pwendell commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1165#discussion_r15214933
  
    --- Diff: core/src/main/scala/org/apache/spark/CacheManager.scala ---
    @@ -140,14 +145,36 @@ private[spark] class CacheManager(blockManager: BlockManager) extends Logging {
               throw new BlockException(key, s"Block manager failed to return cached value for $key!")
           }
         } else {
    -      /* This RDD is to be cached in memory. In this case we cannot pass the computed values
    +      /*
    +       * This RDD is to be cached in memory. In this case we cannot pass the computed values
            * to the BlockManager as an iterator and expect to read it back later. This is because
    -       * we may end up dropping a partition from memory store before getting it back, e.g.
    -       * when the entirety of the RDD does not fit in memory. */
    -      val elements = new ArrayBuffer[Any]
    -      elements ++= values
    -      updatedBlocks ++= blockManager.put(key, elements, storageLevel, tellMaster = true)
    -      elements.iterator.asInstanceOf[Iterator[T]]
    +       * we may end up dropping a partition from memory store before getting it back.
    +       *
    +       * In addition, we must be careful to not unroll the entire partition in memory at once.
    +       * Otherwise, we may cause an OOM exception if the JVM does not have enough space for this
    +       * single partition. Instead, we unroll the values cautiously, potentially aborting and
    +       * dropping the partition to disk if applicable.
    +       */
    +      blockManager.memoryStore.unrollSafely(key, values, updatedBlocks) match {
    +        case Left(arr) =>
    +          // We have successfully unrolled the entire partition, so cache it in memory
    +          updatedBlocks ++=
    +            blockManager.putArray(key, arr, level, tellMaster = true, effectiveStorageLevel)
    +          arr.iterator.asInstanceOf[Iterator[T]]
    +        case Right(it) =>
    +          // There is not enough space to cache this partition in memory
    +          logWarning(s"Not enough space to cache $key in memory! " +
    +            s"Free memory is ${blockManager.memoryStore.freeMemory} bytes.")
    +          var returnValues = it.asInstanceOf[Iterator[T]]
    --- End diff --
    
    can you restructure this to not use a `var`?:
    
    ```scala
    if (putLevel.useDisk) {
      logWarning(s"Persisting $key to disk instead.")
      val diskOnlyLevel = StorageLevel(...)
      putInBlockManager[T](key, it.asInstanceOf[Iterator[T]], level, updatedBlocks, Some(diskOnlyLevel))
    } else {
      it.asInstanceOf[Iterator[T]]
    }
    ```
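
    The control flow under discussion can be sketched as a small self-contained toy. This is not Spark code: `unrollSafely`, the integer "budget", and the string tags here are simplified stand-ins for the real `MemoryStore.unrollSafely`, free-memory accounting, and storage levels. It only illustrates the Either-based pattern: `Left` when the whole partition fits in memory, `Right` when it does not, with a `var`-free fallback in the caller.

    ```scala
    // Toy model of unroll-with-fallback (hypothetical names, not Spark's API).
    object UnrollSketch {
      // Unroll at most `budget` elements. If everything fits, return the
      // materialized array (Left); otherwise return an iterator over what was
      // buffered plus the rest (Right), without materializing the remainder.
      def unrollSafely(values: Iterator[Int], budget: Int): Either[Array[Int], Iterator[Int]] = {
        val buffered = values.buffered
        val acc = scala.collection.mutable.ArrayBuffer[Int]()
        while (buffered.hasNext && acc.length < budget) acc += buffered.next()
        if (buffered.hasNext) Right(acc.iterator ++ buffered) else Left(acc.toArray)
      }

      // Caller: pattern match directly on the result, so no mutable local is
      // needed to hold the fallback iterator.
      def cache(values: Iterator[Int], budget: Int, useDisk: Boolean): (String, List[Int]) =
        unrollSafely(values, budget) match {
          case Left(arr) => ("memory", arr.toList)           // fits: cache in memory
          case Right(it) =>
            if (useDisk) ("disk", it.toList)                 // spill to disk path
            else ("recompute", it.toList)                    // no disk level: just return values
        }

      def main(args: Array[String]): Unit = {
        println(cache(Iterator(1, 2, 3), budget = 10, useDisk = false))
        println(cache(Iterator(1, 2, 3, 4), budget = 2, useDisk = true))
      }
    }
    ```

    The point of the restructure is visible in `cache`: because `match` is an expression, each branch yields the result directly and no `var returnValues` is required.
    
    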

