hama-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hama Wiki] Update of "MultiLayerPerceptron" by YexiJiang
Date Sun, 16 Jun 2013 02:10:10 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.

The "MultiLayerPerceptron" page has been changed by YexiJiang:
http://wiki.apache.org/hama/MultiLayerPerceptron?action=diff&rev1=18&rev2=19

  Note: This page is always under construction.
  
  == What is a Multilayer Perceptron? ==
- A [[http://en.wikipedia.org/wiki/Multilayer_perceptron|multilayer perceptron (MLP)]] is
a kind of feed-forward [[http://en.wikipedia.org/wiki/Artificial_neural_network|artificial
neural network]], which is a mathematic model inspired by the biological neural network.
+ A [[http://en.wikipedia.org/wiki/Multilayer_perceptron|multilayer perceptron (MLP)]] is
a kind of feed-forward [[http://en.wikipedia.org/wiki/Artificial_neural_network|artificial
neural network]], which is a mathematical model inspired by the biological neural network.
  The multilayer perceptron can be used for various machine learning tasks such as classification
and regression.
  
  The basic component of a multilayer perceptron is the neuron. 
@@ -15, +15 @@

  Specifically, the number of neurons in the input layer determines the dimension of the
input features, and the number of neurons in the output layer determines the dimension of the
output labels. Typically, two-class classification and regression problems require the size of
the output layer to be one, while a multi-class problem requires the size of the output layer
to equal the number of classes.
  As for the hidden layer, the number of neurons is a design issue. If there are too few
neurons, the model will not be able to learn complex decision boundaries. Conversely, too many
neurons will reduce the generalization ability of the model.
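  
  For example, a hypothetical layer configuration for a 3-class classification task over
4-dimensional input features might look like the following sketch (the array layout is
illustrative, not the Hama API):
  
  {{{
// Hypothetical layer sizes for a 3-class problem on 4-dimensional features.
int[] layerSizes = new int[] {
    4,  // input layer: one neuron per input feature dimension
    8,  // hidden layer: a design choice balancing capacity and generalization
    3   // output layer: one neuron per class
};
}}}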
  
- Here is an example multilayer perceptron with 1 input layer, 1 hidden layer and 1 output
layer:
+ Here is an example MLP with 1 input layer, 1 hidden layer and 1 output layer:
  
  {{https://docs.google.com/drawings/d/1DCsL5UiT6eqglZDaVS1Ur0uqQyNiXbZDAbDWtiSPWX8/pub?w=813&h=368}}
  
@@ -23, +23 @@

  
  == How does the Multilayer Perceptron work? ==
  
- In general, people use the (already prepared) MLP by feeding the input feature to the input
layer and get the result from the output layer.
+ In general, people use the (already prepared) MLP by feeding the input features to the input
layer and getting the result from the output layer.
  The results are calculated in a feed-forward manner, from the input layer to the output
layer.
  
  One step of feed-forward computation is illustrated in the figure below.
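  
  As a rough illustration, a single feed-forward step can be sketched as follows. This is a
minimal example assuming a sigmoid activation function; the class and variable names are
illustrative and not taken from the Hama API.
  
  {{{
// A minimal sketch of one feed-forward step (illustrative, not the Hama API).
// Each neuron in the next layer computes a weighted sum of the previous
// layer's outputs plus a bias, then applies the activation function.
public class FeedForwardStep {

  // Sigmoid activation, a common choice for MLPs.
  static double sigmoid(double x) {
    return 1.0 / (1.0 + Math.exp(-x));
  }

  // weights[j][i] connects neuron i of the previous layer to neuron j of
  // the next layer; bias[j] is the bias of neuron j.
  static double[] forward(double[] input, double[][] weights, double[] bias) {
    double[] output = new double[weights.length];
    for (int j = 0; j < weights.length; j++) {
      double sum = bias[j];
      for (int i = 0; i < input.length; i++) {
        sum += weights[j][i] * input[i];
      }
      output[j] = sigmoid(sum);
    }
    return output;
  }
}
}}}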
