hama-dev mailing list archives

From "Edward J. Yoon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HAMA-681) Multi Layer Perceptron
Date Sun, 02 Jun 2013 22:42:20 GMT

    [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13672705#comment-13672705 ]

Edward J. Yoon commented on HAMA-681:
-------------------------------------

1) Every source file should include the Apache license header. Please see http://www.apache.org/legal/src-headers.html
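
For reference, the standard ASF source header (described at the page linked above), as it appears at the top of a Java file:

    /**
     * Licensed to the Apache Software Foundation (ASF) under one
     * or more contributor license agreements.  See the NOTICE file
     * distributed with this work for additional information
     * regarding copyright ownership.  The ASF licenses this file
     * to you under the Apache License, Version 2.0 (the
     * "License"); you may not use this file except in compliance
     * with the License.  You may obtain a copy of the License at
     *
     *     http://www.apache.org/licenses/LICENSE-2.0
     *
     * Unless required by applicable law or agreed to in writing, software
     * distributed under the License is distributed on an "AS IS" BASIS,
     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     * See the License for the specific language governing permissions and
     * limitations under the License.
     */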

2) Please use the code formatter. In a multi-developer project, team collaboration is essential
to the project's success. Consistent code formatting improves readability and enhances
teamwork. Here's an Eclipse settings file that you can import: http://hama.apache.org/files/hama-eclipse-formatter.xml

To use:

 - Open Window > Preferences and navigate to the Java > Code Style > Formatter page
 - Press the Import button and select the hama-eclipse-formatter.xml file
 - Then you can apply the formatter to a selected area of a Java source file by pressing
Shift+Ctrl+F.
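
As a rough illustration of the kind of cleanup the formatter does (the imported profile is
authoritative; the 2-space indents and same-line braces shown here are just the common
Apache Java style):

    // before formatting
    public void setWeight( double w )
    {
        this.weight=w;
    }

    // after Shift+Ctrl+F with the profile applied
    public void setWeight(double w) {
      this.weight = w;
    }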

Please check out the Hama trunk from the ASF SVN repository and use SVN if you're going
to continue making changes.
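
For example, assuming the usual ASF repository layout (the patch file name here is
illustrative):

    svn checkout http://svn.apache.org/repos/asf/hama/trunk hama-trunk
    cd hama-trunk
    # ... make your changes, then generate an updated patch:
    svn diff > HAMA-681-v2.patch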
                
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>            Reporter: Christian Herta
>            Assignee: Yexi Jiang
>              Labels: patch, perceptron
>         Attachments: perception.patch
>
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - highly efficient distributed learning
>  - Autoencoder - Sparse (denoising) Autoencoder
>  - Deep Learning
>  
> ---
> Due to the overhead of MapReduce (MR), MR did not seem to be the best strategy for
> distributing the learning of MLPs.
> Therefore the current implementation of the MLP (see MAHOUT-976) should be migrated to
> Hama. First, all dependencies on Mahout (its matrix library) must be removed to get a
> standalone MLP implementation. Then the Hama BSP programming model should be used to
> realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:
>  Videos:
>   - http://www.youtube.com/watch?v=ZmNOAtZIgIk
>   - http://techtalks.tv/talks/57639/
>  MLP and Deep Learning Tutorial:
>   - http://www.stanford.edu/class/cs294a/
>  Scientific Papers:
>   - Google's "Brain" project: http://research.google.com/archive/large_deep_networks_nips2012.html
>   - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>   - http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf
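
To make the BSP approach described above concrete, here is a minimal sketch of one
synchronized weight-update superstep, written against Hama's BSP primitives (BSP.bsp(),
peer.send(), peer.sync(), peer.getCurrentMessage()). The class name, the single scalar
weight, and computeLocalGradient() are hypothetical placeholders, not code from the
attached patch; a plain gradient step w := w - eta * g is assumed.

    import java.io.IOException;

    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hama.bsp.BSP;
    import org.apache.hama.bsp.BSPPeer;
    import org.apache.hama.bsp.sync.SyncException;

    /**
     * Illustrative only: each peer computes a gradient on its local data
     * partition, all peers exchange gradients, and everyone applies the
     * same averaged update after the barrier.
     */
    public class MLPTrainingSketch extends
        BSP<LongWritable, DoubleWritable, NullWritable, NullWritable, DoubleWritable> {

      private double weight;                  // a single weight, for brevity
      private final double learningRate = 0.1;

      @Override
      public void bsp(
          BSPPeer<LongWritable, DoubleWritable, NullWritable, NullWritable, DoubleWritable> peer)
          throws IOException, SyncException, InterruptedException {

        // 1) Local phase: backpropagation over this peer's data partition.
        double localGradient = computeLocalGradient();

        // 2) Communication phase: broadcast the partial gradient to all peers.
        for (String other : peer.getAllPeerNames()) {
          peer.send(other, new DoubleWritable(localGradient));
        }

        // 3) Barrier synchronization: no peer reads before all have sent.
        peer.sync();

        // 4) Update phase: average the gradients, apply w := w - eta * g.
        double sum = 0;
        int count = 0;
        DoubleWritable msg;
        while ((msg = peer.getCurrentMessage()) != null) {
          sum += msg.get();
          count++;
        }
        weight -= learningRate * (sum / count);
      }

      private double computeLocalGradient() {
        return 0.0; // placeholder for the real backprop computation
      }
    }

The all-to-all broadcast above is the naive strategy; sending gradients to a single master
peer, or partitioning the weight matrix across peers, are exactly the kind of alternative
synchronization strategies the description says need to be evaluated.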

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
