hama-dev mailing list archives

From "Christian Herta (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HAMA-681) Multi Layer Perceptron
Date Fri, 23 Nov 2012 16:28:59 GMT

     [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Herta updated HAMA-681:
---------------------------------

    Description: 
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation (the standard recurrences are sketched after this list)
 - Distributed Learning
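
For orientation, these are the backpropagation recurrences such an implementation would
compute, written in LaTeX notation (the symbols are my labeling, not taken from the issue):
with pre-activations z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)} and activations a^{(l)} = \sigma(z^{(l)}),

  \delta^{(L)} = \nabla_{a^{(L)}} C \odot \sigma'(z^{(L)}), \quad
  \delta^{(l)} = ((W^{(l+1)})^{\top} \delta^{(l+1)}) \odot \sigma'(z^{(l)}), \quad
  \partial C / \partial W^{(l)} = \delta^{(l)} (a^{(l-1)})^{\top}.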

The implementation should be the basis for the long-range goals:
 - more efficient learning (Adagrad, L-BFGS; the Adagrad update is sketched after this list)
 - Highly efficient distributed learning
 - Autoencoders: sparse (denoising) autoencoders
 - Deep Learning
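
The Adagrad goal above refers to the per-parameter adaptive step size (sketched here for
reference; \eta, \epsilon, and the indexing are my notation, not from the issue):

  \theta_{t+1,i} = \theta_{t,i} - \frac{\eta}{\sqrt{\sum_{\tau=1}^{t} g_{\tau,i}^{2}} + \epsilon} \, g_{t,i}

Each weight accumulates its own squared-gradient history, so frequently updated weights
automatically receive smaller steps.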
 
---
Due to its overhead, MapReduce (MR) does not seem to be the best strategy for distributing
the learning of MLPs.
Therefore the current MLP implementation (see MAHOUT-976) should be migrated to Hama.
First, all dependencies on Mahout (its matrix library) must be removed to obtain a
standalone MLP implementation. Then the Hama BSP programming model should be used to
realize distributed learning, roughly as in the sketch below.
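
A minimal sketch of one distributed training superstep under Hama's BSP model. The base
class org.apache.hama.bsp.BSP and the BSPPeer calls are Hama's actual API; the class name
MLPTrainingBSP, the single-double gradient message, and computeLocalGradient() are
hypothetical placeholders for illustration, not the planned implementation:

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hama.bsp.BSP;
import org.apache.hama.bsp.BSPPeer;
import org.apache.hama.bsp.sync.SyncException;

public class MLPTrainingBSP extends
    BSP<LongWritable, DoubleWritable, NullWritable, NullWritable, DoubleWritable> {

  @Override
  public void bsp(
      BSPPeer<LongWritable, DoubleWritable, NullWritable, NullWritable, DoubleWritable> peer)
      throws IOException, SyncException, InterruptedException {

    // One task acts as the parameter master for the aggregation step.
    String master = peer.getPeerName(0);

    // Each peer computes a partial gradient over its local data split.
    double partialGradient = computeLocalGradient(peer);

    // Ship the partial to the master, then wait at the barrier.
    peer.send(master, new DoubleWritable(partialGradient));
    peer.sync();

    // The master averages the partials; a full implementation would apply
    // the weight update and broadcast new weights for the next superstep.
    if (peer.getPeerName().equals(master)) {
      double sum = 0;
      int n = peer.getNumCurrentMessages();
      DoubleWritable msg;
      while ((msg = peer.getCurrentMessage()) != null) {
        sum += msg.get();
      }
      double averagedGradient = sum / Math.max(n, 1);
      // apply the weight update using averagedGradient here
    }
  }

  // Placeholder: a real version would run forward/backward passes over the
  // records this peer reads via peer.readNext(key, value).
  private double computeLocalGradient(
      BSPPeer<LongWritable, DoubleWritable, NullWritable, NullWritable, DoubleWritable> peer) {
    return 0.0;
  }
}

Repeating such supersteps until convergence would replace the chain of MR jobs that makes
MapReduce costly for iterative MLP training.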


  was:
Implementation of a Multilayer Perceptron (Neural Network)

 - Learning by Backpropagation 
 - Distributed Learning

Implementation should be the basis for the long-range goals:
 - Highly efficient distributed learning
 - Autoencoders: sparse (denoising) autoencoders
 - Deep Learning
 


    
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>    Affects Versions: 0.5.0
>            Reporter: Christian Herta
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (Adagrad, L-BFGS)
>  - Highly efficient distributed learning
>  - Autoencoders: sparse (denoising) autoencoders
>  - Deep Learning
>  
> ---
> Due to its overhead, MapReduce (MR) does not seem to be the best strategy for distributing
> the learning of MLPs.
> Therefore the current MLP implementation (see MAHOUT-976) should be migrated to Hama.
> First, all dependencies on Mahout (its matrix library) must be removed to obtain a
> standalone MLP implementation. Then the Hama BSP programming model should be used to
> realize distributed learning.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
