systemml-issues mailing list archives

From "Fei Hu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SYSTEMML-1809) Optimize the performance of the distributed MNIST_LeNet_Sgd model training
Date Wed, 26 Jul 2017 17:10:00 GMT

     [ https://issues.apache.org/jira/browse/SYSTEMML-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fei Hu updated SYSTEMML-1809:
-----------------------------
    Description: 
For the current version, there are two bottlenecks in the distributed MNIST_LeNet_Sgd model training:
# Data locality: for {{RemoteParForSpark}}, the tasks are parallelized without considering data locality. This causes a lot of data shuffling when the input data volume is large (see the sketch after this description).
# Result merge: the current experiments indicate that the result merge takes more time than the model training itself.

After the optimization, we need to compare the performance with distributed TensorFlow.
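To make the data-locality and result-merge points concrete, below is a minimal, self-contained Spark sketch (plain Java, not SystemML code; the class name {{LocalityAwareParForSketch}}, the toy blocks, and the per-partition computation are illustrative assumptions). It runs the per-task work with {{mapPartitions}} on the already-partitioned input RDD instead of parallelizing the tasks separately, and merges the partial results with a distributed {{reduce}} instead of collecting every worker result on the driver.

{code:java}
import java.util.Arrays;
import java.util.Iterator;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class LocalityAwareParForSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("LocalityAwareParForSketch")
        .setMaster("local[*]");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {

      // Stand-in for the partitioned input matrix: (blockIndex, blockValues) pairs
      // that are already distributed across the cluster.
      JavaPairRDD<Long, double[]> inputBlocks = sc.parallelizePairs(Arrays.asList(
          new Tuple2<>(0L, new double[] {1, 2}),
          new Tuple2<>(1L, new double[] {3, 4}),
          new Tuple2<>(2L, new double[] {5, 6})), 3);

      // Locality-unaware variant (the bottleneck described above): parallelize the
      // parfor tasks as a separate RDD and then ship the input blocks to wherever
      // the tasks were scheduled, which shuffles blocks across the network.

      // Locality-aware sketch: run the per-task work directly on the partitions
      // that already hold the data, so no blocks are shuffled.
      JavaRDD<Double> partialResults = inputBlocks.mapPartitions(
          (Iterator<Tuple2<Long, double[]>> it) -> {
            double partial = 0;
            while (it.hasNext()) {
              for (double v : it.next()._2()) {
                partial += v; // placeholder for the per-task gradient/update computation
              }
            }
            return Arrays.asList(partial).iterator();
          });

      // Merge the partial results with a distributed reduce instead of pulling
      // every worker result back to the driver (the result-merge bottleneck above).
      double merged = partialResults.reduce(Double::sum);
      System.out.println("merged result: " + merged);
    }
  }
}
{code}

The actual optimization would of course operate on SystemML's matrix block partitions rather than toy arrays, but the shape of the fix is the same: compute where the blocks live, and merge the results with a distributed reduce.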
 


> Optimize the performance of the distributed MNIST_LeNet_Sgd model training
> --------------------------------------------------------------------------
>
>                 Key: SYSTEMML-1809
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1809
>             Project: SystemML
>          Issue Type: Task
>    Affects Versions: SystemML 1.0
>            Reporter: Fei Hu
>              Labels: RemoteParForSpark, deeplearning
>
> For the current version, there are two bottlenecks in the distributed MNIST_LeNet_Sgd model training:
> # Data locality: for {{RemoteParForSpark}}, the tasks are parallelized without considering data locality. This causes a lot of data shuffling when the input data volume is large.
> # Result merge: the current experiments indicate that the result merge takes more time than the model training itself.
> After the optimization, we need to compare the performance with distributed TensorFlow.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
