hama-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hama Wiki] Update of "MultiLayerPerceptron" by YexiJiang
Date Thu, 27 Jun 2013 14:59:50 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.

The "MultiLayerPerceptron" page has been changed by YexiJiang:
https://wiki.apache.org/hama/MultiLayerPerceptron?action=diff&rev1=25&rev2=26

  ## page was renamed from MultipleLayerPerceptron
  Note: This page is always under construction.
+ 
+ <<TableOfContents(5)>>
  
  == What is Multilayer Perceptron? ==
  A [[http://en.wikipedia.org/wiki/Multilayer_perceptron|multilayer perceptron (MLP)]] is
a kind of feed-forward [[http://en.wikipedia.org/wiki/Artificial_neural_network|artificial
neural network]], which is a mathematical model inspired by the biological neural network.
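  
  In the standard feed-forward formulation (a general description, not taken from Hama's code), each layer computes its activations from the previous layer's output:
  {{{
  a^{(l+1)} = f\left( W^{(l)} a^{(l)} + b^{(l)} \right)
  }}}
  where f is the squashing function (e.g. sigmoid or tanh), W^{(l)} is the weight matrix between layer l and layer l+1, and b^{(l)} is the contribution of the bias neuron in layer l.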
@@ -51, +53 @@

  
  == How to use Multilayer Perceptron in Hama? ==
  
 - MLP can be used for both regression and classification. For both tasks, we need first initialize the MLP model by specifying the parameters.
 + MLP can be used for both regression and classification. For both tasks, we first need to initialize the MLP model by specifying its parameters, listed as follows:
  
 + ||<rowbgcolor="#DDDDDD"> Parameter || Description ||
 + ||model path || The path where the trained model is stored. ||
 + ||learningRate || Controls how aggressively the model learns. A large learning rate can accelerate training,<<BR>> but may also cause oscillation. Typically in range (0, 1). ||
 + ||regularization || Controls the complexity of the model. A large regularization value keeps the weights between<<BR>> neurons small and improves the generalization of the MLP, but it may reduce the model precision.<<BR>> Typically in range (0, 0.1). ||
 + ||momentum || Controls the speed of training. A large momentum can accelerate training, but it may<<BR>> also mislead the model update. Typically in range [0.5, 1). ||
 + ||squashing function || Activation function used by the MLP. Candidate squashing functions: ''sigmoid'', ''tanh''. ||
 + ||cost function || Evaluates the error made during training. Candidate cost functions: ''squared error'', ''cross entropy (logistic)''. ||
 + ||layer size array || An array specifying the number of neurons (excluding bias neurons) in each layer,<<BR>> including the input and output layers. ||
+ 
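 + To see how these parameters interact, the following is the textbook gradient-descent update with momentum and weight regularization (a general formulation; the exact rule used by Hama's implementation may differ). Here \eta is the learning rate, \lambda the regularization, and \alpha the momentum:
 + {{{
 + \Delta w^{(t)} = -\eta \left( \frac{\partial E}{\partial w} + \lambda w \right) + \alpha \, \Delta w^{(t-1)}
 + }}}
 + The learning rate scales each gradient step, the regularization term shrinks the weights toward zero, and the momentum term carries over a fraction of the previous update.
 + 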
 + The following sample code shows how to initialize the model.
 + {{{
 +     // where the trained model will be written once training finishes
 +     String modelPath = "/tmp/xorModel-training-by-xor.data";
 +     double learningRate = 0.6;
 +     double regularization = 0.02; // small weight-decay factor
 +     double momentum = 0.3; // carry over 30% of the previous weight update
 +     String squashingFunctionName = "Tanh";
 +     String costFunctionName = "SquaredError";
 +     // 2 input neurons, 5 hidden neurons, 1 output neuron (bias neurons excluded)
 +     int[] layerSizeArray = new int[] { 2, 5, 1 };
 +     SmallMultiLayerPerceptron mlp = new SmallMultiLayerPerceptron(learningRate,
 +         regularization, momentum, squashingFunctionName, costFunctionName,
 +         layerSizeArray);
 + }}}
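 + 
 + Once the model is initialized, it can be trained on data and then queried. The following is a minimal sketch; the ''train''/''output'' method signatures, the training-parameter keys, and the vector classes are assumptions about the Hama ML API, so adapt them to the version you use:
 + {{{
 +     // assumed imports: org.apache.hadoop.fs.Path, java.util.Map, java.util.HashMap,
 +     // org.apache.hama.ml.math.DoubleVector, org.apache.hama.ml.math.DenseDoubleVector
 + 
 +     // hypothetical location of the training data
 +     Path dataPath = new Path("/tmp/xor.data");
 + 
 +     // training parameters (keys are assumptions, not confirmed by this page)
 +     Map<String, String> trainingParams = new HashMap<String, String>();
 +     trainingParams.put("training.iteration", "1000"); // number of iterations
 +     trainingParams.put("modelPath", modelPath);       // where to persist the model
 +     mlp.train(dataPath, trainingParams);
 + 
 +     // query the trained model with a single feature vector
 +     DoubleVector input = new DenseDoubleVector(new double[] { 0, 1 });
 +     DoubleVector result = mlp.output(input);
 + }}}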
  
  === Two-class learning problem ===
  To be added...
