systemml-issues mailing list archives

From "Mike Dusenberry (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SYSTEMML-1760) Improve engine robustness of distributed SGD training
Date Tue, 18 Jul 2017 17:52:00 GMT

    [ https://issues.apache.org/jira/browse/SYSTEMML-1760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16091890#comment-16091890 ]

Mike Dusenberry commented on SYSTEMML-1760:
-------------------------------------------

cc [~niketanpansare] Improvements made here will directly benefit Caffe2DML, since this
distributed SGD algorithm has been integrated into it.
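
For context, the core pattern is computing gradients over disjoint mini-batches inside a
parfor loop and merging the per-batch gradients afterward.  Below is a minimal, hypothetical
DML sketch of that pattern; the dimensions, the names (batch_size, parallel_batches, dWs),
and the linear model standing in for the LeNet forward/backward passes are illustrative
assumptions, not the actual mnist_lenet_distrib_sgd.dml code:

    # Hypothetical data-parallel SGD sketch; a linear model with squared loss
    # stands in for the LeNet forward/backward passes.
    N = 1024            # examples (assumed divisible by batch_size)
    D = 784             # features
    K = 10              # classes
    batch_size = 64
    parallel_batches = 4
    lr = 0.01
    max_iters = 10

    X = rand(rows=N, cols=D)
    Y = rand(rows=N, cols=K)
    W = rand(rows=D, cols=K, min=-0.1, max=0.1)

    for (it in 1:max_iters) {
      # Per-batch gradients land in disjoint row blocks of dWs, so the parfor
      # has no cross-iteration dependencies.
      dWs = matrix(0, rows=parallel_batches*D, cols=K)
      parfor (j in 1:parallel_batches) {
        beg = (((it-1) * parallel_batches + (j-1)) * batch_size) %% N + 1
        end = beg + batch_size - 1
        X_b = X[beg:end, ]
        Y_b = Y[beg:end, ]
        pred = X_b %*% W                           # forward pass
        dW = t(X_b) %*% (pred - Y_b) / batch_size  # backward pass (gradient)
        rbeg = (j-1) * D + 1
        rend = j * D
        dWs[rbeg:rend, ] = dW                      # disjoint write per worker
      }
      # Aggregate the per-batch gradients and take a single SGD step.
      dW_agg = matrix(0, rows=D, cols=K)
      for (j in 1:parallel_batches) {
        rbeg = (j-1) * D + 1
        rend = j * D
        dW_agg = dW_agg + dWs[rbeg:rend, ]
      }
      W = W - lr * (dW_agg / parallel_batches)
    }
    print("squared-loss proxy: " + sum((X %*% W - Y)^2))

Writing each worker's gradient into its own row block keeps the parfor free of cross-iteration
dependencies, which leaves the optimizer free to execute the iterations as local threads or as
distributed tasks -- exactly the engine behavior this issue aims to characterize and improve.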

> Improve engine robustness of distributed SGD training
> -----------------------------------------------------
>
>                 Key: SYSTEMML-1760
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1760
>             Project: SystemML
>          Issue Type: Improvement
>          Components: Algorithms, Compiler, ParFor
>            Reporter: Mike Dusenberry
>            Assignee: Fei Hu
>
> Currently, we have a mathematical framework in place for training with distributed SGD
> in a [distributed MNIST LeNet example | https://github.com/apache/systemml/blob/master/scripts/nn/examples/mnist_lenet_distrib_sgd.dml].
> This task aims to push this to scale in order to determine (1) the current behavior of
> the engine (i.e., does the optimizer actually run this in a distributed fashion?), and
> (2) ways to improve the robustness and performance of this scenario.  The distributed
> SGD framework from this example has already been ported into Caffe2DML, and thus
> improvements made for this task will directly benefit our efforts towards distributed
> training of Caffe models (and Keras models in the future).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
