spark-issues mailing list archives

From "Stavros Kontopoulos (JIRA)" <>
Subject [jira] [Issue Comment Deleted] (SPARK-23485) Kubernetes should support node blacklist
Date Thu, 22 Feb 2018 23:06:00 GMT


Stavros Kontopoulos updated SPARK-23485:
    Comment: was deleted

(was: [~liyinan926] I understand the default behavior of the Kubernetes scheduler (it makes
the placement decisions; apps don't), but there is an alpha feature there, Taint based Evictions,
to help with better or different decisions, right?

"*Taint based Evictions (alpha feature)*: A per-pod-configurable eviction behavior when there
are node problems, which is described in the next section." What is wrong with that in this
case, if I want to limit where something runs?)
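To make the taint discussion concrete, here is a minimal sketch of the Kubernetes taint/toleration matching rule being referenced: a pod remains schedulable on a node only if every taint on that node is matched by a toleration on the pod. The case class names below are hypothetical models, not Spark or Kubernetes client types.

```scala
// Hypothetical plain-data model of Kubernetes taints and tolerations.
case class Taint(key: String, value: String, effect: String)
case class Toleration(key: String, value: String, effect: String)

object TaintMatch {
  // A pod may run on a node only if every node taint is tolerated;
  // a single untolerated taint (e.g. NoSchedule/NoExecute) blocks it.
  def tolerates(nodeTaints: Seq[Taint], tolerations: Seq[Toleration]): Boolean =
    nodeTaints.forall { taint =>
      tolerations.exists { tol =>
        tol.key == taint.key && tol.value == taint.value && tol.effect == taint.effect
      }
    }
}
```

Under this rule, a node tainted with a `NoExecute` effect would evict pods that carry no matching toleration, which is the per-pod-configurable behavior the quoted documentation describes.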

> Kubernetes should support node blacklist
> ----------------------------------------
>                 Key: SPARK-23485
>                 URL:
>             Project: Spark
>          Issue Type: New Feature
>          Components: Kubernetes, Scheduler
>    Affects Versions: 2.3.0
>            Reporter: Imran Rashid
>            Priority: Major
> Spark's BlacklistTracker maintains a list of "bad nodes" which it will not use for running
tasks (e.g., because of bad hardware).  When running on YARN, this blacklist is used to avoid
ever allocating resources on blacklisted nodes.
> I'm just beginning to poke around the Kubernetes code, so apologies if this is incorrect
-- but I didn't see any references to {{scheduler.nodeBlacklist()}} in {{KubernetesClusterSchedulerBackend}},
so it seems this is missing.  Thought of this while looking at SPARK-19755, a similar issue
on Mesos.
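One way the backend could consume {{scheduler.nodeBlacklist()}} is to translate it into a `kubernetes.io/hostname` `NotIn` node-affinity expression on executor pods. The sketch below is a hypothetical illustration that models the pod-spec fragment as plain data; a real implementation would build the equivalent structure with a Kubernetes client library.

```scala
// Hypothetical model of a node-affinity match expression; the real
// backend would emit the equivalent Kubernetes pod-spec structure.
case class MatchExpression(key: String, operator: String, values: Seq[String])

object BlacklistAffinity {
  // Turn a blacklist of node names into a "NotIn" requirement so the
  // scheduler never places executor pods on those hosts. An empty
  // blacklist yields no affinity constraint at all.
  def antiAffinityFor(blacklist: Set[String]): Option[MatchExpression] =
    if (blacklist.isEmpty) None
    else Some(MatchExpression("kubernetes.io/hostname", "NotIn", blacklist.toSeq.sorted))
}
```

This mirrors how the YARN backend excludes blacklisted nodes at allocation time, but expressed through Kubernetes scheduling primitives instead of the resource manager's API.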

This message was sent by Atlassian JIRA

