spark-issues mailing list archives

From "Kay Ousterhout (JIRA)" <>
Subject [jira] [Updated] (SPARK-13279) Spark driver is very slow (due to N^2 check) when there are 200k tasks submitted in a stage
Date Thu, 11 Feb 2016 19:21:18 GMT


Kay Ousterhout updated SPARK-13279:
    Component/s: Scheduler

> Spark driver is very slow (due to N^2 check) when there are 200k tasks submitted in a stage
> -------------------------------------------------------------------------------------------
>                 Key: SPARK-13279
>                 URL: https://issues.apache.org/jira/browse/SPARK-13279
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler, Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Sital Kedia
>            Priority: Minor
> For each task that the TaskSetManager adds, it iterates through the entire list of existing
tasks to check whether the task is already there.  As a result, scheduling a new task set is
O(N^2), which can be slow for large task sets.
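To make the cost concrete, here is a minimal Scala sketch of the pattern described above (hypothetical names, not the actual TaskSetManager code): appending N tasks guarded by a contains() check on an ArrayBuffer scans the buffer once per append, so the total work is O(N^2).

    import scala.collection.mutable.ArrayBuffer

    object QuadraticAddSketch {
      def addAll(numTasks: Int): ArrayBuffer[Int] = {
        val pending = ArrayBuffer.empty[Int]
        for (task <- 0 until numTasks) {
          // contains() is a linear scan of the buffer, so adding
          // N tasks costs O(N^2) comparisons in total
          if (!pending.contains(task)) {
            pending += task
          }
        }
        pending
      }

      def main(args: Array[String]): Unit = {
        val t0 = System.nanoTime()
        addAll(200000)    // the task count reported in this issue
        println(f"added 200k tasks in ${(System.nanoTime() - t0) / 1e9}%.1f s")
      }
    }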
> This is a bug that was introduced by an earlier commit:
that commit removed the "!readding" condition from the if-statement, but since the "readding"
parameter defaulted to false, that commit should have removed the condition check in the if-statement
entirely.
> -------------------------------------
> We discovered this bug while running a large pipeline with 200k tasks, when we found
that the executors were not able to register with the driver because the driver was stuck
holding a global lock in the TaskSchedulerImpl.submitTasks function for a long time (it wasn't
deadlocked -- just taking a long time).
> jstack of the driver -
> executor log -
> From the jstack I see that the thread handling the resource offers from executors (dispatcher-event-loop-9)
is blocked on a lock held by the thread "dag-scheduler-event-loop", which is iterating over
the entire ArrayBuffer when adding a pending task. So when we have 200k pending tasks, because
of this O(N^2) operation, the driver just hangs for more than 5 minutes.
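The blocking behavior described above can be illustrated with a small, self-contained Scala sketch (hypothetical names, not Spark's actual thread or lock layout): one thread holds a shared lock for the duration of the slow bookkeeping, so a second thread standing in for the resource-offer path cannot proceed until the lock is released.

    object LockContentionSketch {
      private val schedulerLock = new Object

      // Stand-in for TaskSchedulerImpl.submitTasks: holds the lock for
      // the whole duration of the (here simulated) O(N^2) loop.
      def submitTasks(): Unit = schedulerLock.synchronized {
        Thread.sleep(5000)
      }

      // Stand-in for the resource-offer path: blocks until the lock is free.
      def resourceOffers(): Unit = schedulerLock.synchronized {}

      def main(args: Array[String]): Unit = {
        val submitter = new Thread(() => submitTasks())
        submitter.start()
        Thread.sleep(50)    // let submitTasks acquire the lock first
        val t0 = System.nanoTime()
        resourceOffers()    // stalls behind the simulated slow section
        println(f"resource offer waited ${(System.nanoTime() - t0) / 1e9}%.1f s")
        submitter.join()
      }
    }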
> Solution - In the addPendingTask function, we don't really need a duplicate check. It's
okay if we add a task to the same queue twice because dequeueTaskFromList will skip already-running
tasks.
> Please note that this is a regression from Spark 1.5.
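A minimal sketch of the proposed fix, under the assumption that a runnable predicate stands in for Spark's running/finished bookkeeping (hypothetical names, not the actual patch): addPendingTask becomes a plain O(1) append, and duplicates are harmless because the dequeue side skips entries that should no longer be scheduled.

    import scala.collection.mutable.ArrayBuffer

    object DedupeAtDequeueSketch {
      // O(1) append with no duplicate scan; duplicates are tolerated
      def addPendingTask(pending: ArrayBuffer[Int], task: Int): Unit = {
        pending += task
      }

      // Mirrors the idea behind dequeueTaskFromList: pop from the end
      // and skip entries that should no longer be scheduled
      def dequeueTask(pending: ArrayBuffer[Int],
                      runnable: Int => Boolean): Option[Int] = {
        while (pending.nonEmpty) {
          val task = pending.remove(pending.length - 1)
          if (runnable(task)) return Some(task)
        }
        None
      }
    }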

