spark-issues mailing list archives

From "lyc (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SPARK-20869) Master should clear failed apps when worker down
Date Thu, 15 Jun 2017 03:23:01 GMT

     [ https://issues.apache.org/jira/browse/SPARK-20869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

lyc updated SPARK-20869:
------------------------
    Description: 
In `Master.removeWorker`, the master clears executor and driver state but does not clear app
state. App state is cleared only on `UnregisterApplication` and in `onDisconnect`: the first
happens when the driver shuts down gracefully, and the second is invoked from Netty's
`channelInactive` callback, which fires when the channel is closed. Neither path handles a
network partition between master and worker.

Follow the steps in [SPARK-19900|https://issues.apache.org/jira/browse/SPARK-19900] and see
the [screenshots|https://cloud.githubusercontent.com/assets/2576762/26398697/d50735a4-40ac-11e7-80d8-6e9e1cf0b62f.png]:
when worker1 is partitioned from the master, the app `app-xxx-000` still shows as running
instead of finished, even though worker1 is down.
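The scenario above can be sketched with a hypothetical, simplified model of the Master's bookkeeping (this is not the actual Spark code; `AppInfo`, `removeWorker`, and the string states are illustrative stand-ins). It shows the proposed behavior: when a worker is removed, apps tied to that worker are marked finished and dropped, since neither `UnregisterApplication` nor `onDisconnect` will fire under a partition.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch of the Master's app bookkeeping, not Spark's real classes.
public class MasterSketch {
    static final class AppInfo {
        final String id;
        final String workerId;
        String state = "RUNNING";
        AppInfo(String id, String workerId) {
            this.id = id;
            this.workerId = workerId;
        }
    }

    final Map<String, AppInfo> apps = new HashMap<>();

    void removeWorker(String workerId) {
        // The existing removeWorker cleans up executors and drivers here.
        // Proposed addition: also finish and drop apps hosted on the lost
        // worker, so a partitioned worker does not leave apps "running".
        Iterator<Map.Entry<String, AppInfo>> it = apps.entrySet().iterator();
        while (it.hasNext()) {
            AppInfo app = it.next().getValue();
            if (app.workerId.equals(workerId)) {
                app.state = "FINISHED";
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        MasterSketch master = new MasterSketch();
        master.apps.put("app-xxx-000", new AppInfo("app-xxx-000", "worker1"));
        master.removeWorker("worker1");
        // prints "false": the app is no longer tracked after worker1 is removed
        System.out.println(master.apps.containsKey("app-xxx-000"));
    }
}
```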

cc [~CodingCat]


  was:
In `Master.removeWorker`, the master clears executor and driver state but does not clear app
state. App state is cleared only on `UnregisterApplication` and in `onDisconnect`: the first
happens when the driver shuts down gracefully, and the second is invoked from Netty's
`channelInactive` callback, which fires when the channel is closed. Neither path handles a
network partition between master and worker.

Follow the steps in [SPARK-19900|https://issues.apache.org/jira/browse/SPARK-19900] and see
the [screenshots|https://cloud.githubusercontent.com/assets/2576762/26398697/d50735a4-40ac-11e7-80d8-6e9e1cf0b62f.png]:
when worker1 is partitioned from the master, the app `app-xxx-000` still shows as running
instead of finished, even though worker1 is down.

cc [~CodingCat]
@lyc


> Master should clear failed apps when worker down
> ------------------------------------------------
>
>                 Key: SPARK-20869
>                 URL: https://issues.apache.org/jira/browse/SPARK-20869
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.3.0
>            Reporter: lyc
>            Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In `Master.removeWorker`, the master clears executor and driver state but does not clear
app state. App state is cleared only on `UnregisterApplication` and in `onDisconnect`: the first
happens when the driver shuts down gracefully, and the second is invoked from Netty's
`channelInactive` callback, which fires when the channel is closed. Neither path handles a
network partition between master and worker.
> Follow the steps in [SPARK-19900|https://issues.apache.org/jira/browse/SPARK-19900]
and see the [screenshots|https://cloud.githubusercontent.com/assets/2576762/26398697/d50735a4-40ac-11e7-80d8-6e9e1cf0b62f.png]:
when worker1 is partitioned from the master, the app `app-xxx-000` still shows as running
instead of finished, even though worker1 is down.
> cc [~CodingCat]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

