spark-issues mailing list archives

From "Saisai Shao (JIRA)" <>
Subject [jira] [Commented] (SPARK-24615) Accelerator-aware task scheduling for Spark
Date Mon, 23 Jul 2018 03:00:00 GMT


Saisai Shao commented on SPARK-24615:

Hi [~tgraves], what you mentioned above is also what we have been thinking about and trying to
solve (this problem also exists in barrier execution).

From the user's point of view, specifying resources through the RDD is the only feasible way
I can currently think of, though a resource is bound to a stage/task, not to a particular RDD. This
means a user could specify resources for different RDDs in a single stage, while Spark can only
honor one resource request within that stage. This brings out several problems, as you mentioned:

*Specify resources to which RDD*

For example, {{rddA.withResources.mapPartitions \{ xxx \}.collect()}} is no different from {{rddA.mapPartitions
\{ xxx \}.withResources.collect()}}, since all the RDDs are executed in the same stage. So in
the current design, no matter whether the resource is specified on {{rddA}} or on the mapped RDD, the
result is the same.
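To make the equivalence concrete, here is a minimal toy model (a Python sketch, not the real Spark API; {{ToyRDD}}, {{with_resources}}, and the no-argument {{map_partitions}} are illustrative assumptions) of why attaching the request anywhere in a chain of narrow transformations yields the same stage-level request:

```python
# Toy model, NOT the real Spark API: a chain of narrow transformations
# collapses into one stage, so a resource request attached anywhere in
# the chain ends up as the same stage-level request.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToyRDD:
    # The request lives on the stage, not on any particular RDD.
    stage_resources: tuple = ()

    def with_resources(self, **amounts):
        return ToyRDD(stage_resources=tuple(sorted(amounts.items())))

    def map_partitions(self):
        return self  # narrow dependency: still the same stage

# Requesting before or after map_partitions yields the same stage request.
before = ToyRDD().with_resources(gpu=1).map_partitions()
after = ToyRDD().map_partitions().with_resources(gpu=1)
```

Both {{before}} and {{after}} carry the identical stage-level request, mirroring the two call sites in the example above.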

*One-to-one dependency RDDs with different resources*

For example, in {{rddA.withResources.mapPartitions \{ xxx \}.withResources.collect()}}, assuming
the resource requests for {{rddA}} and the mapped RDD are different, they still run in a
single stage, so we should resolve such a conflict.

*Multiple-dependency RDDs with different resources*

For example:

{code}
val rddA = rdd.withResources.mapPartitions()
val rddB = rdd.withResources.mapPartitions()
val rddC = rddA.join(rddB)
{code}

If the resources requested for {{rddA}} differ from those for {{rddB}}, then we should also resolve such conflicts.
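One possible resolution (the "largest requirement" idea mentioned below) can be sketched as a per-resource maximum over the conflicting requests. This is a Python sketch under the assumption that a request is a plain map from resource name to amount, which is not the actual SPARK-24615 API:

```python
# Sketch of the "largest requirement wins" merge; requests are modeled
# as dicts from resource name to amount (an assumption, not the real API).
def merge_by_max(*requests):
    """Resolve conflicting requests in one stage by taking, for each
    resource, the largest amount any RDD asked for."""
    merged = {}
    for req in requests:
        for name, amount in req.items():
            merged[name] = max(merged.get(name, 0), amount)
    return merged
```

For the join above, {{merge_by_max(\{"gpu": 2\}, \{"gpu": 4, "fpga": 1\})}} yields {{\{"gpu": 4, "fpga": 1\}}}, which satisfies both parents but may over-allocate for tasks that needed less.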

Previously I proposed using the largest resource requirement to satisfy all the needs. But this
may also waste resources; [~mengxr] suggested setting/merging resources per partition
to avoid the waste. Meanwhile, if there were an API to set resources at the stage level,
this problem would not exist, but Spark doesn't expose such an API to the user; the only
level at which a user can specify resources is the RDD. I'm still thinking of a good way to fix this.
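A rough sketch of the per-partition idea (again in Python, with the list-of-dicts representation being an illustrative assumption): instead of one stage-wide maximum, merge the parents' requests partition by partition, so partitions that need nothing request nothing:

```python
# Sketch of the per-partition alternative: merge parents' requests
# partition by partition instead of applying one stage-wide maximum.
def merge_per_partition(a, b):
    """a, b: equal-length lists of per-partition resource dicts."""
    merged = []
    for ra, rb in zip(a, b):
        keys = set(ra) | set(rb)
        merged.append({k: max(ra.get(k, 0), rb.get(k, 0)) for k in keys})
    return merged
```

With {{a = [\{"gpu": 4\}, \{\}]}} and {{b = [\{\}, \{"gpu": 1\}]}}, the merged result is {{[\{"gpu": 4\}, \{"gpu": 1\}]}}: each partition gets only what it needs, rather than every task in the stage being charged 4 GPUs.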

> Accelerator-aware task scheduling for Spark
> -------------------------------------------
>                 Key: SPARK-24615
>                 URL:
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Saisai Shao
>            Assignee: Saisai Shao
>            Priority: Major
>              Labels: Hydrogen, SPIP
> In the machine learning area, accelerator cards (GPU, FPGA, TPU) are predominant compared
to CPUs. To make the current Spark architecture work with accelerator cards, Spark itself
should understand the existence of accelerators and know how to schedule tasks onto the executors
where accelerators are equipped.
> Current Spark's scheduler schedules tasks based on the locality of the data plus the
availability of CPUs. This will introduce some problems when scheduling tasks with accelerators:
>  # CPU cores usually outnumber accelerators on a node, so using CPU cores to schedule
accelerator-required tasks introduces a mismatch.
>  # In a cluster, we can always assume that CPUs are equipped in each node, but this is not
true of accelerator cards.
>  # The existence of heterogeneous tasks (accelerator-required or not) requires the scheduler
to schedule tasks in a smart way.
> So here we propose to improve the current scheduler to support heterogeneous tasks (accelerator-required
or not). This can be part of the work of Project Hydrogen.
> Details are attached in a Google doc. It doesn't cover all the implementation details, just
highlights the parts that should be changed.
> CC [~yanboliang] [~merlintang]

This message was sent by Atlassian JIRA

