spark-issues mailing list archives

From "Thomas Graves (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-16630) Blacklist a node if executors won't launch on it.
Date Tue, 10 Apr 2018 13:24:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-16630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432244#comment-16432244 ]

Thomas Graves commented on SPARK-16630:
---------------------------------------

Yes, I think it would make sense as the union of all blacklisted nodes.

I'm not sure what you mean by your last question.  The expiry is currently all handled in
the BlacklistTracker, and I wouldn't want to move that out into the YARN allocator.  Just use
the information passed to it, unless there is a case it doesn't cover?
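
To make that concrete, here is a minimal Scala sketch of what "the union of all blacklisted
nodes" could look like from the allocator's side. The names (schedulerBlacklistedNodes,
allocatorBlacklistedNodes, nodesToExclude) and the placeholder hosts are illustrative
assumptions, not the actual Spark API; expiry is assumed to stay inside the BlacklistTracker,
so the scheduler-side set already reflects any timed-out entries.

// Illustrative sketch only: these names are hypothetical, not Spark's actual API.
object NodeBlacklistUnion {

  // Nodes the scheduler-side BlacklistTracker has blacklisted for task failures.
  // Expiry is handled inside the tracker, so this set already excludes entries
  // that have timed out.
  def schedulerBlacklistedNodes(): Set[String] =
    Set("node-a.example.com")                       // placeholder data

  // Nodes the YARN allocator considers bad because containers repeatedly
  // failed to launch on them.
  def allocatorBlacklistedNodes(): Set[String] =
    Set("node-b.example.com")                       // placeholder data

  // The union of both sets is what the allocator would hand to YARN when
  // requesting containers, so placement avoids every known-bad node.
  def nodesToExclude(): Set[String] =
    schedulerBlacklistedNodes() union allocatorBlacklistedNodes()

  def main(args: Array[String]): Unit =
    println(s"Excluding from container requests: ${nodesToExclude()}")
}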

> Blacklist a node if executors won't launch on it.
> -------------------------------------------------
>
>                 Key: SPARK-16630
>                 URL: https://issues.apache.org/jira/browse/SPARK-16630
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.6.2
>            Reporter: Thomas Graves
>            Priority: Major
>
> On YARN, it's possible that a node is messed up or misconfigured such that a container won't
> launch on it.  For instance, the Spark external shuffle handler may not have been loaded on it,
> or it may just be some other hardware issue or Hadoop configuration issue.
> It would be nice if we could recognize this happening and stop trying to launch executors
> on it, since that could end up causing us to hit our max number of executor failures and then
> kill the job.
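
As a rough sketch of the idea in the description, the allocator could count executor launch
failures per node and stop requesting containers on a node once it crosses a threshold, so
repeated failures on one bad host don't eat into the job-wide max-executor-failures budget.
The class name, method names, and threshold below are assumptions for illustration, not
Spark's actual YarnAllocator code; how and when these entries expire is left open here.

// Hypothetical sketch of the idea above; not Spark's actual YarnAllocator logic.
import scala.collection.mutable

class LaunchFailureTracker(maxLaunchFailuresPerNode: Int = 3) {
  private val failuresByNode = mutable.Map.empty[String, Int]
  private val excludedNodes  = mutable.Set.empty[String]

  // Called each time a container fails to come up on a host.
  def recordLaunchFailure(host: String): Unit = {
    val count = failuresByNode.getOrElse(host, 0) + 1
    failuresByNode(host) = count
    if (count >= maxLaunchFailuresPerNode) {
      // Stop requesting executors on this host instead of letting further
      // failures count toward the job-wide max executor failures.
      excludedNodes += host
    }
  }

  def isExcluded(host: String): Boolean = excludedNodes.contains(host)
  def currentExclusions: Set[String] = excludedNodes.toSet
}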





