spark-reviews mailing list archives

From kayousterhout <>
Subject [GitHub] spark pull request: [SPARK-3466] Limit size of results that a driv...
Date Thu, 30 Oct 2014 20:57:44 GMT
Github user kayousterhout commented on a diff in the pull request:
    --- Diff: docs/ ---
    @@ -112,6 +112,18 @@ of the most common options to set are:
    +  <td><code>spark.driver.maxResultSize</code></td>
    +  <td>1g</td>
    +  <td>
    +    Limit of total size of serialized bytes of all partitions for each Spark action (e.g.
    +    it should be at least 1M or 0 (means unlimited). The stage will be aborted if the total size
    +    go above this limit.
    +    Having high limit may cause out-of-memory errors in driver (depends on spark.driver.memory
    +    and memory overhead of objects in JVM). Set a proper limit can protect driver from
    --- End diff ---
    "Set a proper limit can protect driver" --> "Setting a proper limit can protect the driver"

If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at or file a JIRA ticket
with INFRA.

