systemml-issues mailing list archives

From "Mike Dusenberry (JIRA)" <>
Subject [jira] [Commented] (SYSTEMML-1774) Improve Parfor parallelism for deep learning
Date Mon, 17 Jul 2017 21:23:00 GMT


Mike Dusenberry commented on SYSTEMML-1774:

[~Tenma] Can you include a script with the updated parfor loop that could be used to replicate
this issue, as well as the memory settings?  Additionally, can you include the runtime plan
({{ml.setExplain(True)}} in MLContext) for each setup (local machine, Spark cluster)?

[~mboehm7] Can you please assist in fixing this issue?  I suspect that we have two issues
here: (1) the sizes aren't being propagated to the rand op, thus causing a conservative
Spark op to be compiled in the first place, and (2) a lack of a guard against compiling
Spark ops within a REMOTE_SPARK parfor op.
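
A minimal replication sketch might look like the following (hypothetical matrix sizes; the actual script is {{nn/examples/mnist_lenet_distrib_sgd.dml}}, and this is only an illustration of the suspected pattern, not the real workload):

```
# Hypothetical minimal repro: a constrained REMOTE_SPARK parfor containing a rand op.
# Sizes below are made up; the real script is nn/examples/mnist_lenet_distrib_sgd.dml.
parallel_batches = 4
parfor (j in 1:parallel_batches, mode=REMOTE_SPARK, opt=CONSTRAINED) {
  # If sizes are not propagated to this rand op, the compiler may conservatively
  # generate a Spark op inside the parfor body, which REMOTE_SPARK cannot execute.
  X_batch = rand(rows=512, cols=784, min=0, max=1)
  print("batch " + j + " sum: " + sum(X_batch))
}
```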

> Improve Parfor parallelism for deep learning
> --------------------------------------------
>                 Key: SYSTEMML-1774
>                 URL:
>             Project: SystemML
>          Issue Type: Improvement
>          Components: Algorithms, Compiler, ParFor
>    Affects Versions: SystemML 1.0
>            Reporter: Fei Hu
>              Labels: deeplearning
> When running the [distributed MNIST LeNet example |], each mini-batch could ideally
> run in parallel without interaction. We tried to force
> {{parfor (j in 1:parallel_batches)}} at line 137 of
> {{nn/examples/mnist_lenet_distrib_sgd.dml}} to be
> {{parfor (j in 1:parallel_batches, mode=REMOTE_SPARK, opt=CONSTRAINED)}}
> in order to use {{REMOTE_SPARK}} mode, but got the error
> {{org.apache.sysml.runtime.DMLRuntimeException: Not supported: Instructions of type other than CP instructions}}
> on the local machine and a {{java.lang.NullPointerException}} on the Spark cluster.
> More log information can be found in the following comments.

This message was sent by Atlassian JIRA
