spark-issues mailing list archives

From "Norman He (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-16574) Distribute computing to each node based on certain hints
Date Fri, 15 Jul 2016 20:26:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-16574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15380046#comment-15380046 ]

Norman He commented on SPARK-16574:
-----------------------------------

The worker RDD is 40 tuples. They are equivalent, so no data locality should come into play here.
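
For context on the locality point (an illustration added here, not part of the original comment): an RDD built with sc.parallelize carries no preferred locations for its partitions, so there is no locality preference for the scheduler to honor in the first place. A minimal sketch reusing the ticket's names, intended to be launched with spark-submit (which supplies the master URL):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: shows that a parallelized collection reports no preferred
    // locations, which is why data locality is irrelevant for gpuWorkers.
    object LocalityCheck {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("locality-check"))

        val nodes = 10
        val gpuCount = 2
        val cross = for (x <- Array.range(0, nodes); y <- Array.range(0, gpuCount)) yield (x, y)
        val gpuWorkers = sc.parallelize(cross, nodes * gpuCount)

        // Each partition of a parallelized collection prints an empty location list.
        gpuWorkers.partitions.foreach { p =>
          println(s"partition ${p.index}: preferredLocations = ${gpuWorkers.preferredLocations(p)}")
        }

        sc.stop()
      }
    }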

> Distribute computing to each node based on certain hints
> --------------------------------------------------------
>
>                 Key: SPARK-16574
>                 URL: https://issues.apache.org/jira/browse/SPARK-16574
>             Project: Spark
>          Issue Type: Wish
>            Reporter: Norman He
>
> 1) I have a gpuWorkers RDD like the following (each node has 2 GPUs):
>     val nodes = 10
>     val gpuCount = 2
>     val cross: Array[(Int, Int)] =
>       for (x <- Array.range(0, nodes); y <- Array.range(0, gpuCount)) yield (x, y)
>     var gpuWorkers: RDD[(Int, Int)] = sc.parallelize(cross, nodes * gpuCount)
> 2) When the executors run, I would somehow like to distribute code to each node based on
> cross's GPU index (y), so that both GPUs on each machine can be used.
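
To make the wish concrete (an illustrative sketch added here, not the reporter's code and not an existing Spark feature): the GPU index can be read from each tuple's second field inside the task, with a hypothetical runOnGpu standing in for the real GPU work. Note that nothing below pins tuple (x, y) to node x; that placement hint is exactly what the issue is asking Spark to support.

    import org.apache.spark.{SparkConf, SparkContext}

    object GpuWorkersSketch {
      // Hypothetical stand-in for launching real work on a GPU; not a Spark API.
      def runOnGpu(nodeIndex: Int, gpuIndex: Int): String =
        s"node $nodeIndex: ran task on GPU $gpuIndex"

      def main(args: Array[String]): Unit = {
        // Assumes submission via spark-submit, which supplies the master URL.
        val sc = new SparkContext(new SparkConf().setAppName("gpu-workers-sketch"))

        val nodes = 10
        val gpuCount = 2
        val cross: Array[(Int, Int)] =
          for (x <- Array.range(0, nodes); y <- Array.range(0, gpuCount)) yield (x, y)

        // One element per partition, so each (node, gpu) pair becomes its own task.
        val gpuWorkers = sc.parallelize(cross, nodes * gpuCount)

        // The GPU index comes from the data (y), not from the host the task lands on.
        val results = gpuWorkers.map { case (x, y) => runOnGpu(x, y) }
        results.collect().foreach(println)

        sc.stop()
      }
    }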



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
