hama-dev mailing list archives

From "Yexi Jiang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HAMA-681) Multi Layer Perceptron
Date Tue, 14 May 2013 17:37:21 GMT

    [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13657255#comment-13657255 ]

Yexi Jiang commented on HAMA-681:
---------------------------------

Sorry for the late reply, and thank you for granting the permission.

For issue 681, I'm wondering whether there is any deadline for it. As I can
only work on this issue in my spare time, I may not be able to meet the
deadline if it is in the near future.

Based on my understanding of the issue description, the first step is to
implement a distributed backpropagation neural network. To make this first
version compatible with future demands, we need to make the neural network
flexible and general enough to support different learning strategies
(adaptive subgradient, L-BFGS, or maybe plain gradient descent) and different
topologies (an arbitrary number of layers, arbitrary connections between
neurons).
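
Concretely, I am thinking of something along these lines (just a sketch; the
class and method names are tentative and not existing Hama or Mahout APIs).
The update rule is pluggable so that plain gradient descent, Adagrad-style
adaptive subgradient, or L-BFGS can be swapped in without touching the
topology code:

    // Tentative sketch: a pluggable weight-update strategy.
    public interface WeightUpdater {
      // Given the current weights and the gradient of the loss with respect
      // to those weights, return the updated weights.
      double[][] update(double[][] weights, double[][] gradients);
    }

    // Plain gradient descent as the simplest implementation; an Adagrad or
    // L-BFGS updater would implement the same interface.
    public class GradientDescentUpdater implements WeightUpdater {
      private final double learningRate;

      public GradientDescentUpdater(double learningRate) {
        this.learningRate = learningRate;
      }

      @Override
      public double[][] update(double[][] weights, double[][] gradients) {
        double[][] updated = new double[weights.length][];
        for (int i = 0; i < weights.length; ++i) {
          updated[i] = new double[weights[i].length];
          for (int j = 0; j < weights[i].length; ++j) {
            // w <- w - eta * dE/dw
            updated[i][j] = weights[i][j] - learningRate * gradients[i][j];
          }
        }
        return updated;
      }
    }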

The issue also suggests migrating the code from Mahout (I read the
corresponding code and found that the implementation is actually a three-layer
backpropagation neural network trained with gradient descent).

So, is it OK if I implement a three-layer backpropagation neural
network in the Hama style as the first step?
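
To make the proposal concrete, below is a rough sketch of the single-machine
core I have in mind (plain Java, sigmoid units, squared error, biases omitted
for brevity; everything here is tentative):

    // Tentative sketch: one stochastic backpropagation step of a three-layer
    // (input -> hidden -> output) perceptron.
    public class ThreeLayerMLP {
      private final double[][] w1; // input  -> hidden weights
      private final double[][] w2; // hidden -> output weights
      private final double learningRate;

      public ThreeLayerMLP(double[][] w1, double[][] w2, double learningRate) {
        this.w1 = w1;
        this.w2 = w2;
        this.learningRate = learningRate;
      }

      private static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
      }

      // Forward pass through one fully connected sigmoid layer.
      private static double[] forward(double[][] w, double[] in) {
        double[] out = new double[w.length];
        for (int i = 0; i < w.length; ++i) {
          double sum = 0.0;
          for (int j = 0; j < in.length; ++j) {
            sum += w[i][j] * in[j];
          }
          out[i] = sigmoid(sum);
        }
        return out;
      }

      // One training step for a single example.
      public void train(double[] input, double[] target) {
        double[] hidden = forward(w1, input);
        double[] output = forward(w2, hidden);

        // Output-layer delta: (o - t) * o * (1 - o).
        double[] deltaOut = new double[output.length];
        for (int k = 0; k < output.length; ++k) {
          deltaOut[k] = (output[k] - target[k]) * output[k] * (1 - output[k]);
        }

        // Hidden-layer delta, backpropagated through w2.
        double[] deltaHidden = new double[hidden.length];
        for (int j = 0; j < hidden.length; ++j) {
          double sum = 0.0;
          for (int k = 0; k < output.length; ++k) {
            sum += deltaOut[k] * w2[k][j];
          }
          deltaHidden[j] = sum * hidden[j] * (1 - hidden[j]);
        }

        // Gradient descent weight updates.
        for (int k = 0; k < w2.length; ++k) {
          for (int j = 0; j < hidden.length; ++j) {
            w2[k][j] -= learningRate * deltaOut[k] * hidden[j];
          }
        }
        for (int j = 0; j < w1.length; ++j) {
          for (int i = 0; i < input.length; ++i) {
            w1[j][i] -= learningRate * deltaHidden[j] * input[i];
          }
        }
      }
    }

In the distributed version, my rough idea is that each Hama BSP peer would run
this on its own data split, accumulate weight deltas, and exchange them via
peer.send()/peer.sync() at a superstep barrier before applying them. That
would also give us a natural place to experiment with the different
synchronized weight-update strategies mentioned in the issue.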



2013/5/9 Edward J. Yoon (JIRA) <jira@apache.org>




-- 
------
Yexi Jiang,
ECS 251,  yjian004@cs.fiu.edu
School of Computer and Information Science,
Florida International University
Homepage: http://users.cis.fiu.edu/~yjian004/

                
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of MapReduce (MR), MR does not seem to be the best strategy
> for distributing the learning of MLPs.
> Therefore the current implementation of the MLP (see MAHOUT-976) should be
> migrated to Hama. First, all dependencies on Mahout (its matrix library) must be
> removed to get a standalone MLP implementation. Then the Hama BSP programming
> model should be used to realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:
>  Videos:
>     - http://www.youtube.com/watch?v=ZmNOAtZIgIk
>     - http://techtalks.tv/talks/57639/
>  MLP and Deep Learning Tutorial:
>  - http://www.stanford.edu/class/cs294a/
>  Scientific Papers:
>  - Google's "Brain" project: 
> http://research.google.com/archive/large_deep_networks_nips2012.html
>  - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>  - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
