systemml-issues mailing list archives

From "Mike Dusenberry (JIRA)" <>
Subject [jira] [Commented] (SYSTEMML-1760) Improve engine robustness of distributed SGD training
Date Thu, 03 Aug 2017 17:00:09 GMT


Mike Dusenberry commented on SYSTEMML-1760:

[~Tenma] Awesome!  That is a great speedup.  Now that we've identified that the
parfor optimizer is not choosing the optimal plan for this type of scenario, we can use
these experiments to make improvements so that a naive usage of parfor yields a plan with
the same performance (or better!).
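For context, a "naive usage of parfor" here would be a plain data-parallel loop over mini-batches, left entirely to the optimizer to parallelize. A minimal DML sketch (variable names such as X, y, num_batches, and batch_size are hypothetical, not taken from the issue):

```
# Hypothetical DML sketch: a naive parfor over mini-batches, where each
# iteration reads its own slice of the data and computes a gradient
# independently, leaving the execution plan to the parfor optimizer.
parfor (j in 1:num_batches) {
  beg = (j - 1) * batch_size + 1
  end = j * batch_size
  X_batch = X[beg:end,]
  y_batch = y[beg:end,]
  # per-batch gradient computation would go here (model-specific)
}
```

The point of the experiments above is that, ideally, this plain formulation should compile to a plan as fast as a hand-tuned one.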

> Improve engine robustness of distributed SGD training
> -----------------------------------------------------
>                 Key: SYSTEMML-1760
>                 URL:
>             Project: SystemML
>          Issue Type: Improvement
>          Components: Algorithms, Compiler, ParFor
>            Reporter: Mike Dusenberry
>            Assignee: Fei Hu
>         Attachments: Runtime_Table.png
> Currently, we have a mathematical framework in place for training with distributed SGD
> in a [distributed MNIST LeNet example |].  This task aims to push this at scale to
> determine (1) the current behavior of the engine (i.e., does the optimizer actually run
> this in a distributed fashion), and (2) ways to improve the robustness and performance
> for this scenario.  The distributed SGD framework from this example has already been
> ported into Caffe2DML, and thus improvements made for this task will directly benefit
> our efforts towards distributed training of Caffe models (and Keras in the

This message was sent by Atlassian JIRA
