systemml-issues mailing list archives

From "Fei Hu (JIRA)" <>
Subject [jira] [Updated] (SYSTEMML-1760) Improve engine robustness of distributed SGD training
Date Fri, 28 Jul 2017 17:17:00 GMT


Fei Hu updated SYSTEMML-1760:
    Attachment: Runtime_Table.png

> Improve engine robustness of distributed SGD training
> -----------------------------------------------------
>                 Key: SYSTEMML-1760
>                 URL:
>             Project: SystemML
>          Issue Type: Improvement
>          Components: Algorithms, Compiler, ParFor
>            Reporter: Mike Dusenberry
>            Assignee: Fei Hu
>         Attachments: Runtime_Table.png
> Currently, we have a mathematical framework in place for training with distributed SGD
> in a [distributed MNIST LeNet example |].  This task aims to push this at scale to
> determine (1) the current behavior of the engine (i.e., does the optimizer actually run
> this in a distributed fashion?), and (2) ways to improve the robustness and performance
> for this scenario.  The distributed SGD framework from this example has already been
> ported into Caffe2DML, and thus improvements made for this task will directly benefit
> our efforts towards distributed training of Caffe models (and Keras in the future).

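The core idea behind the distributed SGD scheme described above (each worker computes a gradient on its own mini-batch, and the results are aggregated before the model update) can be illustrated with a minimal Python sketch. This is a hypothetical illustration of data-parallel gradient averaging, not SystemML's actual parfor optimizer or the Caffe2DML implementation; the model, data, and learning rate are invented for the example.

```python
# Hypothetical sketch of data-parallel mini-batch SGD: each "worker" computes the
# gradient of squared error on its own data shard for a 1-D linear model y = w * x,
# and the driver averages the gradients before updating w. (Illustration only;
# not SystemML code.)

def shard_gradient(w, shard):
    """Gradient of mean squared error 0.5 * (w*x - y)^2 over one worker's shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def distributed_sgd(shards, w=0.0, lr=0.1, epochs=100):
    for _ in range(epochs):
        # In a real engine (e.g. inside a parfor loop), each gradient would be
        # computed on a separate worker; here we simulate the workers serially.
        grads = [shard_gradient(w, shard) for shard in shards]
        w -= lr * sum(grads) / len(grads)  # average the gradients, then update
    return w

# Data generated from y = 2*x, split across two simulated workers.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = distributed_sgd(shards)
print(round(w, 3))  # converges toward the true weight 2.0
```

Whether the engine actually executes the worker loop in a distributed fashion, rather than serially as in this simulation, is exactly the question (1) this task sets out to answer.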
This message was sent by Atlassian JIRA
