spark-issues mailing list archives

From "Sean Owen (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-9111) Dumping the memory info when an executor dies abnormally
Date Thu, 16 Jul 2015 17:05:04 GMT

     [ https://issues.apache.org/jira/browse/SPARK-9111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-9111:
-----------------------------
    Priority: Minor  (was: Major)

-XX:+HeapDumpOnOutOfMemoryError? The only issue with this is that it dumps huge files to
local disk which aren't necessarily cleaned up. Can this be disabled by default, please?
Actually, you can add this fairly easily for debugging as a user, right?
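
For reference, a user can already opt into this today without any Spark change by passing
the JVM flag through the executor options, along these lines (the dump path below is
illustrative and must exist on each worker's local disk):

    import org.apache.spark.SparkConf

    // Ask each executor JVM to write a heap dump when it hits an OOM.
    // /tmp/executor-dumps is an illustrative path; dumps land on the
    // worker's local disk and are not cleaned up automatically.
    val conf = new SparkConf()
      .set("spark.executor.extraJavaOptions",
        "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/executor-dumps")

The same value can be passed to spark-submit with
--conf "spark.executor.extraJavaOptions=..." instead.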

> Dumping the memory info when an executor dies abnormally
> --------------------------------------------------------
>
>                 Key: SPARK-9111
>                 URL: https://issues.apache.org/jira/browse/SPARK-9111
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Spark Core
>            Reporter: Zhang, Liye
>            Priority: Minor
>
> When an executor does not finish normally, we should dump its memory info right before
> the JVM shuts down, so that if the executor is killed because of an OOM we can easily
> check how the memory was used and which part caused the OOM.
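
What the description asks for amounts to something like the sketch below: a JVM shutdown
hook in the executor that logs heap and non-heap usage on the way out. This is an
illustrative sketch only, not the actual SPARK-9111 patch; the object and thread names
are made up, and a real change would go through Spark's logging rather than stderr:

    import java.lang.management.ManagementFactory

    // Illustrative sketch, not the proposed patch: log memory usage
    // from a shutdown hook so an abnormal executor exit leaves a trace.
    object MemoryDumpOnExit {
      def install(): Unit = {
        Runtime.getRuntime.addShutdownHook(new Thread("memory-dump-on-exit") {
          override def run(): Unit = {
            val mem = ManagementFactory.getMemoryMXBean
            val heap = mem.getHeapMemoryUsage
            val nonHeap = mem.getNonHeapMemoryUsage
            System.err.println(
              s"Heap: used=${heap.getUsed} committed=${heap.getCommitted} max=${heap.getMax}")
            System.err.println(
              s"Non-heap: used=${nonHeap.getUsed} committed=${nonHeap.getCommitted}")
          }
        })
      }
    }

Note that a shutdown hook runs on normal termination and on most fatal errors, but not
when the process is killed with SIGKILL (e.g. by a resource manager enforcing memory
limits), so it cannot cover every abnormal exit.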



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

