From Apache Wiki <wikidi...@apache.org>
Subject [Hama Wiki] Update of "MultiLayerPerceptron" by YexiJiang
Date Thu, 27 Jun 2013 14:36:57 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.

The "MultiLayerPerceptron" page has been changed by YexiJiang:
https://wiki.apache.org/hama/MultiLayerPerceptron?action=diff&rev1=24&rev2=25

  
  {{http://people.apache.org/~yxjiang/downloads/equ2.png}}
  
- For each step of feed-forward, the calculated results are propagated one layer close to the output layer.
+ For each step of feed-forward, the calculated results are propagated one layer closer to the output layer. Once the calculated results reach the output layer, the feed-forward procedure finishes and the neurons of the output layer contain the final results. More details about the feed-forward calculation can be found in the [[http://ufldl.stanford.edu/wiki/index.php/UFLDL_Tutorial|UFLDL tutorial]].
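
+ To make the feed-forward step concrete, below is a minimal Java sketch of propagating the activations of one layer to the next (a weighted sum followed by a sigmoid squashing function). It is a sketch only; the class and method names are illustrative and are not part of the Hama API.
+ 
+ {{{#!java
+ // Minimal sketch of one feed-forward step; names are illustrative,
+ // not the Hama API.
+ public class FeedForwardSketch {
+ 
+   // Squashing (activation) function: the logistic sigmoid.
+   static double sigmoid(double z) {
+     return 1.0 / (1.0 + Math.exp(-z));
+   }
+ 
+   // nextLayer[j] = sigmoid(bias[j] + sum_i weights[j][i] * prevLayer[i])
+   static double[] feedForwardStep(double[][] weights, double[] bias,
+       double[] prevLayer) {
+     double[] nextLayer = new double[weights.length];
+     for (int j = 0; j < weights.length; j++) {
+       double z = bias[j];
+       for (int i = 0; i < prevLayer.length; i++) {
+         z += weights[j][i] * prevLayer[i];
+       }
+       nextLayer[j] = sigmoid(z);
+     }
+     return nextLayer;
+   }
+ 
+   public static void main(String[] args) {
+     double[][] weights = { { 0.1, -0.2 }, { 0.4, 0.3 } }; // 2 x 2 layer
+     double[] bias = { 0.0, 0.1 };
+     double[] input = { 1.0, 0.5 };
+     System.out.println(
+         java.util.Arrays.toString(feedForwardStep(weights, bias, input)));
+   }
+ }
+ }}}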
  
+ == How is the Multilayer Perceptron trained in Hama? ==
  
+ In general, the training data is stored in HDFS and is distributed across multiple machines. In Hama, the current implementation (0.6.2 and later) supports training the MLP in parallel.
+ Two kinds of components are involved in the training procedure: the '''''master task''''' and the '''''groom tasks'''''. The master task is in charge of merging the model updating information and sending it to all the groom tasks. The groom tasks are in charge of calculating the weight updates according to the training data.
  
- To be added...
+ The training procedure is iterative, and each iteration consists of two phases: ''update weights'' and ''merge update''.
+ In the ''update weights'' phase, each ''groom task'' first updates its local model according to the message received from the ''master task''. It then computes the weight updates locally on its assigned data partition and finally sends the updated weights to the ''master task''.
+ In the ''merge update'' phase, the ''master task'' updates the model according to the messages received from the ''groom tasks''. It then distributes the updated model to all ''groom tasks''.
+ The two phases alternate until the termination condition is met (e.g., a specified number of iterations is reached).
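+ 
+ The following self-contained Java sketch simulates the two phases in a single process (one master, several grooms, updates merged by averaging). All names are illustrative assumptions; this is not the actual Hama implementation, which runs the master and groom tasks as distributed BSP tasks.
+ 
+ {{{#!java
+ // Illustrative simulation of the two-phase training loop; names are
+ // assumptions, not the actual Hama implementation.
+ public class TwoPhaseTrainingSketch {
+ 
+   static final int NUM_GROOMS = 3;
+   static final int MAX_ITERATIONS = 10;
+ 
+   public static void main(String[] args) {
+     double[] model = new double[4]; // model held by the master task
+ 
+     for (int iter = 0; iter < MAX_ITERATIONS; iter++) {
+       double[] merged = new double[model.length];
+ 
+       // "Update weights" phase: each groom task updates its local model
+       // with the model received from the master task, computes weight
+       // updates on its assigned data partition, and sends the result back.
+       for (int groom = 0; groom < NUM_GROOMS; groom++) {
+         double[] updated = updateWeights(model.clone(), groom);
+         for (int i = 0; i < merged.length; i++) {
+           merged[i] += updated[i];
+         }
+       }
+ 
+       // "Merge update" phase: the master task merges the received weights
+       // (here by averaging) and redistributes the merged model.
+       for (int i = 0; i < merged.length; i++) {
+         merged[i] /= NUM_GROOMS;
+       }
+       model = merged;
+     }
+     System.out.println(java.util.Arrays.toString(model));
+   }
+ 
+   // Stand-in for the local gradient computation on one data partition.
+   static double[] updateWeights(double[] local, int groom) {
+     for (int i = 0; i < local.length; i++) {
+       local[i] += 0.01 * (groom + 1); // placeholder for a real gradient step
+     }
+     return local;
+   }
+ }
+ }}}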
  
  
  
  
- == How Multilayer Perceptron is trained in Hama? ==
+ == How to use Multilayer Perceptron in Hama? ==
- To be added...
  
+ MLP can be used for both regression and classification. For both tasks, we first need to initialize the MLP model by specifying its parameters.
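+ 
+ For example, initialization might look like the sketch below. The class names, package path, and constructor signature are assumptions (they may differ across Hama versions), and the parameter values are arbitrary; please consult the javadoc of your Hama release for the exact API.
+ 
+ {{{#!java
+ // Assumed package and class names; verify against your Hama release.
+ import org.apache.hama.ml.perception.MultiLayerPerceptron;
+ import org.apache.hama.ml.perception.SmallMultiLayerPerceptron;
+ 
+ public class MLPInitSketch {
+   public static void main(String[] args) {
+     double learningRate = 0.5;        // step size of gradient descent
+     double regularization = 0.01;     // weight-decay factor
+     double momentum = 0.1;            // momentum of weight updates
+     String squashingFunctionName = "Sigmoid";
+     String costFunctionName = "SquaredError";
+     int[] layerSizeArray = new int[] { 2, 5, 1 }; // input, hidden, output
+ 
+     // Hypothetical constructor; the argument list is an assumption.
+     MultiLayerPerceptron mlp = new SmallMultiLayerPerceptron(learningRate,
+         regularization, momentum, squashingFunctionName, costFunctionName,
+         layerSizeArray);
+   }
+ }
+ }}}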
  
- == How to use Multilayer Perceptron in Hama? ==
- To be added...
  
  === Two class learning problem ===
  To be added...
