[ https://issues.apache.org/jira/browse/HAMA681?page=com.atlassian.jira.plugin.system.issuetabpanels:alltabpanel ]
Christian Herta updated HAMA681:

Description:
Implementation of a Multilayer Perceptron (Neural Network)
 - Learning by Backpropagation
 - Distributed Learning
The implementation should be the basis for the long-range goals:
 - more efficient learning (AdaGrad, L-BFGS)
 - highly efficient distributed learning
 - Autoencoder / Sparse (denoising) Autoencoder
 - Deep Learning
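The backpropagation learning named above can be sketched in plain Python, independent of any Hama or Mahout API. This is a purely illustrative toy (a 2-2-1 sigmoid network trained on XOR); none of the names below come from the actual implementation:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 MLP; weights and layout are hypothetical, not Mahout/Hama code.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_step(lr=0.5):
    global b2
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: output delta, then hidden deltas
        # (derivative of the squared error through the sigmoids).
        dy = (y - t) * y * (1 - y)
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
        b2 -= lr * dy

before = loss()
for _ in range(2000):
    train_step()
after = loss()
```

The adaptive methods mentioned in the goals (AdaGrad, L-BFGS) would replace the fixed learning rate in `train_step` with a per-parameter or curvature-aware step size.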

Due to the overhead of MapReduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
Therefore the current implementation of the MLP (see MAHOUT976) should be migrated to Hama.
First, all dependencies on Mahout (its matrix library) must be removed to obtain a standalone MLP
implementation. Then the Hama BSP programming model should be used to realize distributed
learning.
Different strategies for efficient synchronized weight updates have to be evaluated.
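One candidate strategy for synchronized weight updates under BSP is: each peer computes a gradient on its local data partition during the computation phase, and at the barrier all gradients are combined into a single update applied identically on every peer. The simulation below sketches that idea in plain Python for a 1-D linear model; it does not use the Hama BSP API, and all names are hypothetical:

```python
# Simulation of BSP-style supersteps for synchronized weight updates:
# peers compute local gradients independently, then a barrier combines
# them into one global update. Model: y = w * x, squared-error loss.

def local_gradient(w, partition):
    # d/dw of 0.5 * (w*x - y)^2, averaged over this peer's partition.
    return sum((w * x - y) * x for x, y in partition) / len(partition)

def superstep(w, partitions, lr=0.05):
    # "Computation" phase: every peer works on its own partition.
    grads = [local_gradient(w, p) for p in partitions]
    # "Barrier" phase: gradients are averaged and the shared weight
    # is updated once, so all peers stay in sync.
    return w - lr * sum(grads) / len(grads)

# Data generated from the true weight 2.0, split across three
# hypothetical peers.
partitions = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(5.0, 10.0), (6.0, 12.0)],
]

w = 0.0
for _ in range(50):
    w = superstep(w, partitions)
```

Alternatives to evaluate would vary what crosses the barrier (full weight vectors vs. gradients, averaged vs. summed) and how often peers synchronize.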
Resources:
 - Google's "Brain" project: http://research.google.com/archive/large_deep_networks_nips2012.html
 - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
 - Stanford CS294A (sparse autoencoder lecture notes): http://www.stanford.edu/class/cs294a/
 - Stacked denoising autoencoders (JMLR): http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf
> Multi Layer Perceptron
> 
>
> Key: HAMA681
> URL: https://issues.apache.org/jira/browse/HAMA681
> Project: Hama
> Issue Type: New Feature
> Components: machine learning
> Affects Versions: 0.5.0
> Reporter: Christian Herta
>

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
