[Hama Wiki] Update of "MultiLayerPerceptron" by YexiJiang
Date: Fri, 14 Jun 2013 03:22:01 -0000

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.
The "MultiLayerPerceptron" page has been changed by YexiJiang:
http://wiki.apache.org/hama/MultiLayerPerceptron?action=diff&rev1=15&rev2=16

Note: This page is always under construction.

== What is Multilayer Perceptron? ==
- A [[http://en.wikipedia.org/wiki/Multilayer_perceptron|multilayer perceptron]] is a kind of feed forward [[http://en.wikipedia.org/wiki/Artificial_neural_network|artificial neural network]], which is a mathematic model inspired by the biological neural network.
+ A [[http://en.wikipedia.org/wiki/Multilayer_perceptron|multilayer perceptron (MLP)]] is a kind of feed-forward [[http://en.wikipedia.org/wiki/Artificial_neural_network|artificial neural network]], a mathematical model inspired by biological neural networks.
The multilayer perceptron can be used for various machine learning tasks such as classification and regression.

The basic component of a multilayer perceptron is the neuron.

In a multilayer perceptron, the neurons are aligned in layers, and the neurons in any two adjacent layers are connected in pairs by weighted edges.
- A practical multilayer perceptron consists of at least three layers of neurons, including one input layer, one or more hidden layers, and one output layer.
+ A practical multilayer perceptron consists of at least three layers of neurons, including one input layer, one or more hidden layers, and one output layer.
+
+ The sizes of the input and output layers determine what kind of data an MLP can accept.
+ Specifically, the number of neurons in the input layer determines the dimension of the input features, and the number of neurons in the output layer determines the dimension of the output labels. Typically, two-class classification and regression problems require an output layer of size one, while a multi-class problem requires the size of the output layer to equal the number of classes.
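The layer-sizing rules above can be sketched in a few lines. This is an illustrative helper, not part of Hama's API; the function name `mlp_layer_sizes` and the fixed hidden-layer size of 8 are arbitrary choices for the example.

```python
def mlp_layer_sizes(num_features, task, num_classes=None):
    """Return (input, hidden, output) sizes for a minimal 3-layer MLP.

    Hypothetical helper: the hidden size of 8 is an arbitrary choice;
    in practice it is tuned per problem.
    """
    hidden = 8
    if task in ("regression", "two-class"):
        # Regression and two-class classification need one output neuron.
        output = 1
    elif task == "multi-class":
        # Multi-class classification needs one output neuron per class.
        output = num_classes
    else:
        raise ValueError("unknown task: %s" % task)
    return (num_features, hidden, output)

print(mlp_layer_sizes(4, "multi-class", num_classes=3))  # (4, 8, 3)
print(mlp_layer_sizes(10, "regression"))                 # (10, 8, 1)
```

The input size always matches the feature dimension; only the output size changes with the kind of task.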
Here is an example multilayer perceptron with 1 input layer, 1 hidden layer and 1 output layer:

@@ -19, +22 @@

== How does a Multilayer Perceptron work? ==

- In general, people use the (already prepared) MLP by feed the input feature to the input layer and get the result from the output layer.
+ In general, people use the (already trained) MLP by feeding the input features to the input layer and reading the result from the output layer.
+ The results are calculated in a feed-forward fashion, from the input layer to the output layer.

To be added...
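The feed-forward calculation described above can be sketched as follows. This is a minimal illustration, not Hama's implementation: it assumes a sigmoid activation at every neuron and, for brevity, omits bias terms.

```python
import math

def feed_forward(layer_weights, x):
    """Propagate input x through the layers of an MLP.

    layer_weights is a list of weight matrices, one per layer; each
    matrix is a list of rows, one row per neuron in that layer.
    Each neuron computes sigmoid(weighted sum of the previous layer).
    Biases are omitted to keep the sketch short.
    """
    activations = x
    for W in layer_weights:
        activations = [
            1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(row, activations))))
            for row in W
        ]
    return activations  # output-layer activations

# Example: 2 input neurons, 2 hidden neurons, 1 output neuron.
# The weight values here are arbitrary, purely for illustration.
W_hidden = [[0.5, -0.5], [0.3, 0.8]]
W_output = [[1.0, -1.0]]
y = feed_forward([W_hidden, W_output], [1.0, 0.0])
print(y)  # a single sigmoid output, strictly between 0 and 1
```

Each layer's output becomes the next layer's input, so the result emerges at the output layer after one left-to-right pass.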