hama-dev mailing list archives

From "Yexi Jiang (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (HAMA-770) Use a unified model to represent linear regression, logistic regression, MLP, autoencoder, and deepNets
Date Thu, 27 Jun 2013 15:49:20 GMT

     [ https://issues.apache.org/jira/browse/HAMA-770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yexi Jiang reassigned HAMA-770:
-------------------------------

    Assignee: Yexi Jiang
    
> Use a unified model to represent linear regression, logistic regression, MLP, autoencoder, and deepNets
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HAMA-770
>                 URL: https://issues.apache.org/jira/browse/HAMA-770
>             Project: Hama
>          Issue Type: Improvement
>            Reporter: Yexi Jiang
>            Assignee: Yexi Jiang
>
> In principle, linear regression, logistic regression, MLP, autoencoder, and deepNets
> can all be represented by a generic neural network model. Using a generic model and
> deriving the concrete models from it increases code reuse.
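>
> A minimal Java sketch of this idea (the class and field names here are illustrative
> assumptions, not Hama's actual API): a generic layered model holds the pieces that the
> concrete models merely configure.
>
>     import java.util.function.DoubleUnaryOperator;
>
>     public abstract class GenericNeuralNetwork {
>       /** Cost function comparing a target value against the model output. */
>       public interface CostFunction {
>         double apply(double target, double actual);
>       }
>
>       protected final int[] layerSizes;                // neurons per layer
>       protected DoubleUnaryOperator squashingFunction; // e.g. identity, sigmoid
>       protected CostFunction costFunction;             // e.g. squared error
>
>       protected GenericNeuralNetwork(int... layerSizes) {
>         this.layerSizes = layerSizes;
>       }
>     }
>
> Each model below then reduces to a choice of layer sizes, squashing function, and
> cost function.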
> More concretely: 
> Linear regression is a two-layer neural network (one input layer and one output layer)
> obtained by setting the squashing function to the identity function f(x) = x and the
> cost function to squared error.
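>
> For example (a sketch with illustrative names), the two choices that specialize the
> generic model into linear regression:
>
>     public class LinearRegressionSketch {
>       // identity squashing function: f(x) = x
>       static double identity(double x) {
>         return x;
>       }
>
>       // squared-error cost for one example: 0.5 * (target - actual)^2
>       static double squaredError(double target, double actual) {
>         double diff = target - actual;
>         return 0.5 * diff * diff;
>       }
>
>       public static void main(String[] args) {
>         System.out.println(identity(3.0));          // 3.0
>         System.out.println(squaredError(1.0, 0.8)); // ~0.02
>       }
>     }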
> Logistic regression is similar to linear regression, except that the squashing function
> is the sigmoid and the cost function is cross entropy.
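>
> The corresponding sketch for logistic regression (again with illustrative names):
>
>     public class LogisticRegressionSketch {
>       // sigmoid squashing function: f(x) = 1 / (1 + e^(-x))
>       static double sigmoid(double x) {
>         return 1.0 / (1.0 + Math.exp(-x));
>       }
>
>       // cross-entropy cost for a binary target y in {0, 1}
>       static double crossEntropy(double y, double yHat) {
>         return -(y * Math.log(yHat) + (1.0 - y) * Math.log(1.0 - yHat));
>       }
>
>       public static void main(String[] args) {
>         System.out.println(sigmoid(0.0));           // 0.5
>         System.out.println(crossEntropy(1.0, 0.9)); // ~0.105
>       }
>     }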
> MLP is a neural network with at least two layers of neurons. The squashing function can
> be sigmoid, tanh, or others, and the cost function can be cross entropy, squared error,
> or others.
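>
> A sketch of the generic forward pass such an MLP needs (plain arrays, tanh squashing;
> all names are illustrative):
>
>     public class MlpForwardSketch {
>       // Forward pass through fully connected layers. weights[l][j] holds the
>       // weights of neuron j in layer l; index 0 is the bias term.
>       static double[] forward(double[] input, double[][][] weights) {
>         double[] activation = input;
>         for (double[][] layer : weights) {
>           double[] next = new double[layer.length];
>           for (int j = 0; j < layer.length; j++) {
>             double sum = layer[j][0]; // bias
>             for (int i = 0; i < activation.length; i++) {
>               sum += layer[j][i + 1] * activation[i];
>             }
>             next[j] = Math.tanh(sum); // tanh squashing; sigmoid works too
>           }
>           activation = next;
>         }
>         return activation;
>       }
>
>       public static void main(String[] args) {
>         double[][][] weights = {
>           {{0.1, 0.5, -0.4}, {0.0, 0.3, 0.8}}, // hidden layer: 2 neurons
>           {{-0.2, 0.7, 0.7}}                   // output layer: 1 neuron
>         };
>         System.out.println(forward(new double[] {1.0, 2.0}, weights)[0]);
>       }
>     }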
> A (sparse) autoencoder can be used for nonlinear dimensionality reduction and anomaly
> detection. It can also be used as the building block of deep nets.
> Generally it is a three-layer neural network, where the input layer is the same size as
> the output layer, and the hidden layer is typically smaller than the input/output layers.
> Its cost function is squared error plus a KL-divergence sparsity penalty.
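>
> A sketch of that cost (illustrative names; rho is the desired sparsity, rhoHat the
> observed mean activation of a hidden unit):
>
>     public class SparseAutoencoderCostSketch {
>       // KL(rho || rhoHat) = rho*log(rho/rhoHat) + (1-rho)*log((1-rho)/(1-rhoHat))
>       static double kl(double rho, double rhoHat) {
>         return rho * Math.log(rho / rhoHat)
>             + (1.0 - rho) * Math.log((1.0 - rho) / (1.0 - rhoHat));
>       }
>
>       // total cost = squared reconstruction error + beta * sum of KL penalties
>       static double cost(double squaredError, double[] meanHiddenActivations,
>                          double rho, double beta) {
>         double penalty = 0.0;
>         for (double rhoHat : meanHiddenActivations) {
>           penalty += kl(rho, rhoHat);
>         }
>         return squaredError + beta * penalty;
>       }
>
>       public static void main(String[] args) {
>         System.out.println(cost(0.12, new double[] {0.04, 0.09}, 0.05, 3.0));
>       }
>     }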
> deepNets are used for deep learning; a simple architecture stacks several autoencoders
> together.
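>
> A sketch of the stacking idea (the layer sizes are made-up examples): each
> autoencoder's hidden layer becomes the input of the next, and the trained encoder
> halves form the deep net.
>
>     public class StackedAutoencoderSketch {
>       public static void main(String[] args) {
>         int[] firstAutoencoder  = {256, 64, 256}; // input, hidden, reconstruction
>         int[] secondAutoencoder = {64, 16, 64};   // trained on the 64-unit codes
>
>         // Stacking the encoder halves yields a 256 -> 64 -> 16 deep net
>         // (plus whatever task-specific output layer is added on top).
>         int[] deepNet = {256, 64, 16};
>         System.out.println(java.util.Arrays.toString(deepNet));
>       }
>     }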

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
