hadoop-mapreduce-user mailing list archives

From Alpha Bagus Sunggono <bagusa...@gmail.com>
Subject Re: Neural Network in hadoop
Date Thu, 12 Feb 2015 10:43:46 GMT
In my opinion:
- This is just one iteration. Batch gradient descent means finding all the
deltas first and only then updating all the weights, so I think it is wrong
for each record to update the weights as it goes. The weight update should
happen only after the reduce step (a rough mapper/reducer sketch follows
below).
- The backpropagation update can likewise be found after the reduce step.
- This iteration should be repeated again and again. The termination
condition should be measured by the delta error of the sigmoid output at
the end of the mappers; the iteration process can be terminated once that
delta error becomes sufficiently small (see the driver sketch further
below).
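To make that concrete, here is a rough, untested sketch (my own code, not
Mahout's or the paper's) of how the mapper could emit partial gradients and
the reducer could sum them, for the simple sigmoidal perceptron from the
linked example: three weights with w0 as the bias, input lines assumed to
look like "x1,x2,target", and the current weights passed in through
hypothetical "weights.N" configuration keys:

// Sketch only: batch gradient descent for a 3-weight sigmoidal
// perceptron (w0 = bias). Input lines are assumed to be "x1,x2,target";
// the driver puts the current weights into the Configuration before
// each iteration.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;

public class PerceptronGradient {

  public static class GradientMapper
      extends Mapper<LongWritable, Text, IntWritable, DoubleWritable> {
    private final double[] w = new double[3];    // fixed for this iteration
    private final double[] grad = new double[3]; // partial gradient, this split
    private double squaredError = 0.0;

    @Override
    protected void setup(Context ctx) {
      Configuration conf = ctx.getConfiguration();
      for (int i = 0; i < 3; i++)
        w[i] = conf.getDouble("weights." + i, 0.0); // hypothetical key names
    }

    @Override
    protected void map(LongWritable key, Text value, Context ctx) {
      String[] f = value.toString().split(",");
      double x1 = Double.parseDouble(f[0]);
      double x2 = Double.parseDouble(f[1]);
      double t  = Double.parseDouble(f[2]);
      // Forward pass through the sigmoid.
      double o = 1.0 / (1.0 + Math.exp(-(w[0] + w[1] * x1 + w[2] * x2)));
      // Delta rule: (t - o) * o * (1 - o) is the descent direction
      // (negative gradient) for the squared error.
      double delta = (t - o) * o * (1.0 - o);
      grad[0] += delta;        // bias input is 1
      grad[1] += delta * x1;
      grad[2] += delta * x2;
      squaredError += (t - o) * (t - o);
    }

    @Override
    protected void cleanup(Context ctx)
        throws IOException, InterruptedException {
      // Emit one partial gradient per weight; weights are NOT updated here.
      for (int i = 0; i < 3; i++)
        ctx.write(new IntWritable(i), new DoubleWritable(grad[i]));
      // Key 3 carries this split's squared error, for the stop test.
      ctx.write(new IntWritable(3), new DoubleWritable(squaredError));
    }
  }

  public static class GradientReducer
      extends Reducer<IntWritable, DoubleWritable, IntWritable, DoubleWritable> {
    @Override
    protected void reduce(IntWritable key, Iterable<DoubleWritable> parts,
        Context ctx) throws IOException, InterruptedException {
      double sum = 0.0;
      for (DoubleWritable p : parts) sum += p.get();
      // Keys 0..2: summed gradient for w0..w2; key 3: total squared error.
      ctx.write(key, new DoubleWritable(sum));
    }
  }
}

Note the mapper accumulates its gradient over the whole split and emits it
once in cleanup(), so very little data crosses to the reducer, and the
weights themselves never change inside a map task.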
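The driver then runs one job per iteration, applies the batch update after
each reduce, and stops once the error stops improving. Again a sketch under
my own assumptions: the learning rate, tolerance, output paths, and
iteration cap are placeholder values, and it matches the key layout above:

// Sketch only: one MapReduce job per batch-gradient-descent iteration.
// The reducer output (summed gradients + squared error) is read back
// from HDFS after each job; the weight update happens here, after the
// reduce, never inside a mapper.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PerceptronDriver {
  public static void main(String[] args) throws Exception {
    double[] w = {0.1, 0.1, 0.1};    // small initial weights (placeholder)
    double eta = 0.5;                // learning rate (assumed)
    double tolerance = 1e-4;         // stop when the error change is tiny
    double prevError = Double.MAX_VALUE;

    for (int iter = 0; iter < 1000; iter++) {
      Configuration conf = new Configuration();
      for (int i = 0; i < 3; i++)
        conf.setDouble("weights." + i, w[i]);

      Job job = Job.getInstance(conf, "perceptron-iter-" + iter);
      job.setJarByClass(PerceptronDriver.class);
      job.setMapperClass(PerceptronGradient.GradientMapper.class);
      job.setReducerClass(PerceptronGradient.GradientReducer.class);
      job.setMapOutputKeyClass(IntWritable.class);
      job.setMapOutputValueClass(DoubleWritable.class);
      job.setOutputKeyClass(IntWritable.class);
      job.setOutputValueClass(DoubleWritable.class);
      Path out = new Path("/tmp/nn/iter-" + iter); // placeholder path
      FileInputFormat.addInputPath(job, new Path(args[0]));
      FileOutputFormat.setOutputPath(job, out);
      if (!job.waitForCompletion(true)) System.exit(1);

      // Read back the summed gradients and the total squared error.
      double[] grad = new double[3];
      double error = 0.0;
      FileSystem fs = FileSystem.get(conf);
      for (FileStatus st : fs.listStatus(out)) {
        if (!st.getPath().getName().startsWith("part-")) continue;
        try (BufferedReader r = new BufferedReader(
            new InputStreamReader(fs.open(st.getPath())))) {
          String line;
          while ((line = r.readLine()) != null) {
            String[] kv = line.split("\t");
            int k = Integer.parseInt(kv[0]);
            double v = Double.parseDouble(kv[1]);
            if (k < 3) grad[k] = v; else error = v;
          }
        }
      }

      // Batch update: all weights move at once, after the reduce.
      for (int i = 0; i < 3; i++)
        w[i] += eta * grad[i];

      // Terminate once the error stops improving meaningfully.
      if (Math.abs(prevError - error) < tolerance) break;
      prevError = error;
    }
  }
}

This also answers the termination question: compare the summed squared
error between iterations and stop once the change drops below a tolerance.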


On Thu, Feb 12, 2015 at 5:14 PM, unmesha sreeveni <unmeshabiju@gmail.com>
wrote:

> I am trying to implement a Neural Network in MapReduce. Apache Mahout
> refers to this paper
> <http://www.cs.stanford.edu/people/ang/papers/nips06-mapreducemulticore.pdf>
>
> Neural Network (NN): We focus on backpropagation. By defining a network
> structure (we use a three layer network with two output neurons classifying
> the data into two categories), each mapper propagates its set of data
> through the network. For each training example, the error is back
> propagated to calculate the partial gradient for each of the weights in the
> network. The reducer then sums the partial gradients from each mapper and
> does a batch gradient descent to update the weights of the network.
>
> Here <http://homepages.gold.ac.uk/nikolaev/311sperc.htm> is a worked-out
> example of the gradient descent algorithm.
>
> Gradient Descent Learning Algorithm for Sigmoidal Perceptrons
> <http://pastebin.com/6gAQv5vb>
>
>    1. What is the better way to parallelize a neural network algorithm
>    from a MapReduce perspective? In the mapper: each record owns a partial
>    weight (from the above example: w0, w1, w2; I suspect w0 is the bias). A
>    random weight is assigned initially; the first record calculates the
>    output (o) and the weights get updated, then the second record also finds
>    the output and deltaW gets updated with the previous deltaW value. Coming
>    into the reducer, the sum of the gradients is calculated, i.e. if we have
>    3 mappers, we get 3 sets of w0, w1, w2. These are summed, and using batch
>    gradient descent we update the weights of the network.
>    2. In the above method, how can we ensure which previous weight is
>    taken when there is more than one map task? Each map task has its own
>    updated weights, so how can that be accurate?
>    3. Where can I find the backward propagation step in the above-mentioned
>    gradient descent neural network algorithm? Or is it fine as implemented?
>    4. What is the termination condition mentioned in the algorithm?
>
> Please help me with some pointers.
>
> Thanks in advance.
>
> --
> *Thanks & Regards *
>
>
> *Unmesha Sreeveni U.B*
> *Hadoop, Bigdata Developer*
> *Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
> http://www.unmeshasreeveni.blogspot.in/
>
>
>


-- 
Alpha Bagus Sunggono
http://www.dyavacs.com
