hama-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hama Wiki] Update of "MultiLayerPerceptron" by YexiJiang
Date Sat, 15 Jun 2013 19:45:09 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.

The "MultiLayerPerceptron" page has been changed by YexiJiang:
http://wiki.apache.org/hama/MultiLayerPerceptron?action=diff&rev1=16&rev2=17

  
  In general, people use an (already trained) MLP by feeding the input features to the input
layer and reading the result from the output layer.
  The results are calculated in a feed-forward manner, from the input layer to the output
layer.
+ 
+ One step of feed-forward is illustrated in the below figure.
+ 
+ {{https://docs.google.com/drawings/d/1hJ2glrKKIWokQOy6RI8iw1T8TmuZFcbaCwnzGoKc8gk/pub?w=586&h=302}}
+ 
+ For each layer except the input layer, the value of each neuron is calculated as the linear
combination of the values output by the neurons of the previous layer, where each weight
determines the contribution of a neuron in the previous layer to the current neuron. After
obtaining the linear combination result z, a non-linear squashing function is applied to
constrain the output to a restricted range; typically the sigmoid or tanh function is used.

+ 
+ With each step of feed-forward, the calculated results are propagated one layer closer to
the output layer.
+ 
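The step described above can be sketched in Java. This is a minimal illustration, not the Hama API: the class and method names (`FeedForwardStep`, `forwardLayer`) and the example weights are all hypothetical.

```java
// Minimal sketch of one feed-forward step of an MLP layer.
// Names and values here are illustrative, not part of the Hama API.
public class FeedForwardStep {

    // Sigmoid squashing function: constrains z to the range (0, 1).
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Compute the outputs of one layer from the previous layer's outputs.
    // weights[j][i] is the contribution of previous-layer neuron i
    // to current-layer neuron j.
    static double[] forwardLayer(double[] prev, double[][] weights) {
        double[] out = new double[weights.length];
        for (int j = 0; j < weights.length; j++) {
            double z = 0.0;                        // linear combination
            for (int i = 0; i < prev.length; i++) {
                z += weights[j][i] * prev[i];
            }
            out[j] = sigmoid(z);                   // non-linear squashing
        }
        return out;
    }

    public static void main(String[] args) {
        double[] input = {1.0, 0.5};               // hypothetical input features
        double[][] weights = {{0.4, -0.6}, {0.2, 0.8}};
        double[] hidden = forwardLayer(input, weights);
        System.out.printf("%.4f %.4f%n", hidden[0], hidden[1]);
    }
}
```

Repeating `forwardLayer` once per layer, feeding each result into the next layer's weights, propagates the values from the input layer to the output layer.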
  
  
  To be added...
