spark-issues mailing list archives

From "Matt Cheah (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-1860) Standalone Worker cleanup should not clean up running executors
Date Wed, 01 Oct 2014 16:50:33 GMT

    [ https://issues.apache.org/jira/browse/SPARK-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155097#comment-14155097 ]

Matt Cheah commented on SPARK-1860:
-----------------------------------

This might be a silly question, but are we guaranteed that the application folder will always
be named by application ID? I looked at ExecutorRunner, and it does generate the folder from the
application ID and executor ID, but code comments in ExecutorRunner indicate it is only used in
standalone cluster mode. Hence I didn't tie any logic to the actual naming of the folders.

> Standalone Worker cleanup should not clean up running executors
> ---------------------------------------------------------------
>
>                 Key: SPARK-1860
>                 URL: https://issues.apache.org/jira/browse/SPARK-1860
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.0.0
>            Reporter: Aaron Davidson
>            Priority: Blocker
>
> The default values of the standalone worker cleanup code clean up all application data
> every 7 days. This includes jars that were added for any executors that happen to run
> longer than 7 days, hitting streaming jobs especially hard.
> Executors' log/data folders should not be cleaned up while they are still running. Until
> then, this behavior should not be enabled by default.
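
For context, the worker-side cleanup discussed above is controlled by a few standalone-mode configuration properties. A minimal fragment (a sketch; the numeric values shown are the documented defaults, with cleanup explicitly switched on) might look like:

```properties
# Enable periodic cleanup of worker application directories (off by default)
spark.worker.cleanup.enabled    true
# How often the worker checks for stale application dirs, in seconds (default: 30 minutes)
spark.worker.cleanup.interval   1800
# Age after which an application's data is eligible for deletion, in seconds (default: 7 days)
spark.worker.cleanup.appDataTtl 604800
```

With these defaults, any application directory older than 604800 seconds is deleted regardless of whether its executors are still running, which is exactly the behavior this issue flags for long-lived streaming jobs.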



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

