hama-dev mailing list archives

From "Edward J. Yoon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HAMA-681) Multi Layer Perceptron
Date Tue, 04 Jun 2013 22:49:21 GMT

    [ https://issues.apache.org/jira/browse/HAMA-681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13675370#comment-13675370 ]

Edward J. Yoon commented on HAMA-681:
-------------------------------------

I'm still seeing errors like:

{code}
edward@udanax:~/workspace/hama-trunk$ mvn install -projects ml

....

-------------------------------------------------------------------------------
Test set: org.apache.hama.ml.perception.TestSmallMultiLayerPerceptron
-------------------------------------------------------------------------------
Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 36.445 sec <<< FAILURE!
testWriteReadMLP(org.apache.hama.ml.perception.TestSmallMultiLayerPerceptron)  Time elapsed: 0.003 sec  <<< FAILURE!
java.lang.AssertionError: expected:<SmallMLP> but was:<null>
        at org.junit.Assert.fail(Assert.java:91)
        at org.junit.Assert.failNotEquals(Assert.java:645)
        at org.junit.Assert.assertEquals(Assert.java:126)
        at org.junit.Assert.assertEquals(Assert.java:145)
        at org.apache.hama.ml.perception.TestSmallMultiLayerPerceptron.testWriteReadMLP(TestSmallMultiLayerPerceptron.java:72)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
        at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
        at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
        at org.junit.experimental.ParallelComputer$1$1.call(ParallelComputer.java:46)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:722)
{code}

I would recommend writing the model to a temp directory such as /tmp instead of "src/test/resources/perception".
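
For instance, a minimal sketch of the temp-file pattern (the payload bytes below are a placeholder; the real test would serialize the SmallMultiLayerPerceptron itself through whatever write/read API the patch defines):

{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Write the serialized model under java.io.tmpdir (usually /tmp) rather
// than the checked-in path src/test/resources/perception, so test runs
// never collide on, or pollute, the source tree.
public class TempModelPathExample {
  public static void main(String[] args) throws IOException {
    File modelFile = File.createTempFile("smallMLP-", ".model");
    modelFile.deleteOnExit(); // clean up even if an assertion fails later

    byte[] fakeModel = "SmallMLP".getBytes("UTF-8"); // placeholder payload
    Files.write(modelFile.toPath(), fakeModel);

    String readBack = new String(Files.readAllBytes(modelFile.toPath()), "UTF-8");
    System.out.println("round-trip OK: " + readBack);
  }
}
{code}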
                
> Multi Layer Perceptron 
> -----------------------
>
>                 Key: HAMA-681
>                 URL: https://issues.apache.org/jira/browse/HAMA-681
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>            Reporter: Christian Herta
>            Assignee: Yexi Jiang
>              Labels: patch, perceptron
>         Attachments: HAMA-681.patch, HAMA-681.patch, perception.patch
>
>
> Implementation of a Multilayer Perceptron (Neural Network)
>  - Learning by Backpropagation 
>  - Distributed Learning
> The implementation should be the basis for the long-range goals:
>  - more efficient learning (AdaGrad, L-BFGS)
>  - highly efficient distributed learning
>  - autoencoders: sparse (denoising) autoencoders
>  - deep learning
>  
> ---
> Due to the overhead of MapReduce (MR), MR does not seem to be the best strategy for distributing the learning of MLPs.
> Therefore, the current MLP implementation (see MAHOUT-976) should be migrated to Hama. First, all dependencies on Mahout (its matrix library) must be removed to get a standalone MLP implementation. Then the Hama BSP programming model should be used to realize distributed learning.
> Different strategies for efficient synchronized weight updates have to be evaluated.
> Resources:
>  Videos:
>   - http://www.youtube.com/watch?v=ZmNOAtZIgIk
>   - http://techtalks.tv/talks/57639/
>  MLP and Deep Learning Tutorial:
>   - http://www.stanford.edu/class/cs294a/
>  Scientific Papers:
>   - Google's "Brain" project: http://research.google.com/archive/large_deep_networks_nips2012.html
>   - Neural Networks and BSP: http://ipdps.cc.gatech.edu/1998/biosp3/bispp4.pdf
>   - Stacked denoising autoencoders: http://jmlr.csail.mit.edu/papers/volume11/vincent10a/vincent10a.pdf
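
The description above calls for BSP-based distributed learning with synchronized weight updates. Below is a minimal sketch of one candidate strategy (master-averaged updates per superstep), assuming the Hama 0.6-era BSP API (org.apache.hama.bsp.BSP, BSPPeer#send/#sync). The single double standing in for the gradient, plus computeLocalGradient/applyWeightUpdate, are placeholders for illustration; a real MLP would exchange Writable weight-matrix deltas computed by backpropagation.

{code}
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hama.bsp.BSP;
import org.apache.hama.bsp.BSPPeer;
import org.apache.hama.bsp.sync.SyncException;

// Sketch: each superstep, every peer sends its local gradient to a master
// peer, which averages the contributions and broadcasts the update.
public class GradientAverageBSP extends
    BSP<LongWritable, DoubleWritable, NullWritable, DoubleWritable, DoubleWritable> {

  @Override
  public void bsp(
      BSPPeer<LongWritable, DoubleWritable, NullWritable, DoubleWritable, DoubleWritable> peer)
      throws IOException, SyncException, InterruptedException {
    String master = peer.getPeerName(0); // peer 0 acts as the aggregator

    double localGradient = computeLocalGradient(); // placeholder for backprop
    peer.send(master, new DoubleWritable(localGradient));
    peer.sync(); // barrier: all local gradients are now at the master

    if (peer.getPeerName().equals(master)) {
      double sum = 0;
      int n = 0;
      DoubleWritable msg;
      while ((msg = peer.getCurrentMessage()) != null) {
        sum += msg.get();
        n++;
      }
      double averaged = sum / n; // n >= 1, since every peer sent a gradient
      for (String peerName : peer.getAllPeerNames()) {
        peer.send(peerName, new DoubleWritable(averaged)); // broadcast
      }
    }
    peer.sync(); // barrier: every peer now holds the same averaged update

    double update = peer.getCurrentMessage().get();
    applyWeightUpdate(update);
  }

  private double computeLocalGradient() {
    return 0.0; // placeholder: run backprop over the peer's data partition
  }

  private void applyWeightUpdate(double update) {
    // placeholder: apply the synchronized update to the local model
  }
}
{code}

Other strategies worth evaluating against this baseline would be all-to-all averaging (no single aggregator) or sending sparse deltas to cut message volume.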

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
