spark-issues mailing list archives

From "WangJianfei (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-6567) Large linear model parallelism via a join and reduceByKey
Date Mon, 12 Sep 2016 06:57:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-6567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15483287#comment-15483287 ]

WangJianfei commented on SPARK-6567:
------------------------------------

@Reza Zadeh Any progress on this problem?
Thanks.
codlife

> Large linear model parallelism via a join and reduceByKey
> ---------------------------------------------------------
>
>                 Key: SPARK-6567
>                 URL: https://issues.apache.org/jira/browse/SPARK-6567
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, MLlib
>            Reporter: Reza Zadeh
>         Attachments: model-parallelism.pptx
>
>
> To train a linear model, each training point in the training set needs its dot product
> computed against the model, per iteration. If the model is large (too large to fit in
> memory on a single machine), then SPARK-4590 proposes using a parameter server.
> There is an easier way to achieve this without parameter servers. In particular, if the
> data is held as a BlockMatrix and the model as an RDD, then each block can be joined with
> the relevant part of the model, followed by a reduceByKey to compute the dot products.
> This obviates the need for a parameter server, at least for linear models. However, it's
> unclear how it compares performance-wise to parameter servers.
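
For anyone following along, here is a minimal Scala sketch of the join-and-reduceByKey
idea from the description. It is not the proposed implementation: it splits features
column-wise into plain keyed RDDs rather than an actual BlockMatrix, and all names
(featureBlocks, modelBlocks, blockId) are illustrative assumptions.

    import org.apache.spark.{SparkConf, SparkContext}

    object JoinDotProductSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("join-dot-product").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Training data split column-wise into blocks:
        // (blockId, (rowId, sparse features local to that block)).
        val featureBlocks = sc.parallelize(Seq(
          (0, (0L, Array((0, 1.0), (1, 2.0)))),
          (1, (0L, Array((0, 3.0)))),
          (0, (1L, Array((1, 4.0))))
        ))

        // Model weights split into the same blocks: (blockId, weights for that block).
        // Because the model is an RDD, no single machine needs to hold all of it.
        val modelBlocks = sc.parallelize(Seq(
          (0, Array(0.5, -1.0)),
          (1, Array(2.0))
        ))

        // Join each data block with the relevant part of the model, compute the
        // partial dot product per training point, then sum partials via reduceByKey.
        val dotProducts = featureBlocks.join(modelBlocks)
          .map { case (_, ((rowId, features), weights)) =>
            val partial = features.map { case (j, v) => v * weights(j) }.sum
            (rowId, partial)
          }
          .reduceByKey(_ + _)

        dotProducts.collect().foreach { case (rowId, dot) =>
          println(s"row $rowId -> dot product $dot")
        }
        sc.stop()
      }
    }

The per-iteration shuffle cost of the join is the obvious thing to measure against a
parameter server, which is exactly the open question the description ends on.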



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

