spark-issues mailing list archives

From "Patrick Wendell (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-1989) Exit executors faster if they get into a cycle of heavy GC
Date Mon, 15 Sep 2014 23:13:42 GMT

     [ https://issues.apache.org/jira/browse/SPARK-1989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-1989:
-----------------------------------
    Fix Version/s:     (was: 1.1.0)
                   1.2.0

> Exit executors faster if they get into a cycle of heavy GC
> ----------------------------------------------------------
>
>                 Key: SPARK-1989
>                 URL: https://issues.apache.org/jira/browse/SPARK-1989
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: Matei Zaharia
>             Fix For: 1.2.0
>
>
> I've seen situations where an application allocates more memory across its tasks
> + cache than it needs to proceed, but instead of giving up, the JVM gets into a cycle
> where it repeatedly runs full GCs, frees a small fraction of the heap, and continues.
> This leads to timeouts and confusing error messages. It would be better to fail fast
> with an OOM. The JVM has options to support this:
> http://java.dzone.com/articles/tracking-excessive-garbage.
> The right solution would probably be:
> - Add config options, consumed by spark-submit, that set -XX:GCTimeLimit and -XX:GCHeapFreeLimit
> with more conservative values than the JVM defaults (e.g. a 90% time limit and a 5% free limit)
> - Make sure we pass these into the Java options for executors in each deployment mode
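As a rough sketch of what the second bullet could look like today, the flags can be forwarded to executors through Spark's existing spark.executor.extraJavaOptions mechanism (the application class and jar names below are placeholders, and the specific values are the ones suggested in this issue, not tested recommendations):

```shell
# Sketch: tighten the HotSpot GC-overhead thresholds on executors.
# -XX:+UseGCOverheadLimit enables the check (it is on by default in HotSpot);
# GCTimeLimit/GCHeapFreeLimit replace the JVM defaults (98% time spent in GC,
# 2% of heap freed) with the more conservative 90%/5% proposed above, so an
# executor that is stuck in a full-GC cycle throws OutOfMemoryError sooner.
spark-submit \
  --class org.example.MyApp \
  --conf "spark.executor.extraJavaOptions=-XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=5" \
  my-app.jar
```

The open question in this issue is making spark-submit set these automatically, with dedicated config options, across all deployment modes rather than relying on users to pass them by hand.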



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

