hadoop-hdfs-user mailing list archives

From unmesha sreeveni <unmeshab...@gmail.com>
Subject Re: Neural Network in hadoop
Date Fri, 13 Feb 2015 06:10:29 GMT
On Thu, Feb 12, 2015 at 4:13 PM, Alpha Bagus Sunggono <bagusalfa@gmail.com> wrote:

> In my opinion:
> - This is just for one iteration. Batch gradient means: find all deltas,
> then update all weights. So I think it is improper for each record to have
> its weights updated; the weights should be updated only after the reduce step.
> - Backpropagation happens after the reduce step.
> - This iteration should be repeated again and again.
I doubt whether the iteration is per record. For example, say we have just 5
records: will there be 5 iterations, or is it some other concept?

From the above example, *2* will be the delta error. So let's say we have a
threshold value: for each record we check whether *2* is less than or equal
to the threshold value, and otherwise continue the iteration. Is it like
that? Am I wrong?

Sorry, I am not very clear on the iteration part.
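To make the question concrete, here is a rough sketch of the other reading,
where one iteration means one full pass (epoch) over all records and the delta
error is only a termination test. This is plain Python, not Hadoop code; the
single sigmoid unit, learning rate, and threshold are all made up for
illustration:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(w, x):
    # w[0] is treated as the bias weight
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

def train(records, targets, lr=0.5, threshold=0.05, max_epochs=5000):
    """One iteration = one full pass (epoch) over ALL records."""
    w = [random.uniform(-0.5, 0.5) for _ in range(len(records[0]) + 1)]
    for epoch in range(max_epochs):
        # termination test: total delta error over the whole pass
        total_error = sum((t - predict(w, x)) ** 2
                          for x, t in zip(records, targets))
        if total_error <= threshold:
            return w, epoch
        # batch update: accumulate gradients over all records, update once
        grad = [0.0] * len(w)
        for x, t in zip(records, targets):
            o = predict(w, x)
            delta = (t - o) * o * (1 - o)  # error times sigmoid derivative
            grad[0] += delta
            for i, xi in enumerate(x):
                grad[i + 1] += delta * xi
        w = [wi + lr * gi for wi, gi in zip(w, grad)]
    return w, max_epochs
```

So with 5 records there would not be 5 iterations; every iteration touches all
5 records, and the number of iterations depends only on how fast the delta
error falls below the threshold.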

> The termination condition should be measured by the delta error of the
> sigmoid output at the end of the mapper.
>
> The iteration process can be terminated once we get a suitably small value
> of the delta error.

Is there any criterion for updating the delta weights? After calculating the
output of the perceptron, we find the error: if the error is less than the
threshold, the delta weight is not updated; otherwise we update the delta
weight. Is it like that?
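In other words, the scheme I am asking about looks like this. This is only a
hypothetical sketch of my question, not Mahout or the paper's code; the
function name, arguments, and constants are my own:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def accumulate_delta(w, delta_w, x, t, lr=0.5, threshold=0.01):
    """Per-record delta-weight accumulation as I understand it:
    skip the record when its error is already below the threshold,
    otherwise add its gradient contribution into delta_w.
    w[0] is the bias weight; all names here are made up."""
    o = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
    error = (t - o) ** 2
    if error < threshold:
        return delta_w                      # error small enough: no update
    delta = (t - o) * o * (1 - o)           # error times sigmoid derivative
    delta_w[0] += lr * delta                # bias contribution
    for i, xi in enumerate(x):
        delta_w[i + 1] += lr * delta * xi   # weight contributions
    return delta_w
```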

> On Thu, Feb 12, 2015 at 5:14 PM, unmesha sreeveni <unmeshabiju@gmail.com>
> wrote:
>> I am trying to implement a Neural Network in MapReduce. Apache Mahout
>> refers to this paper
>> <http://www.cs.stanford.edu/people/ang/papers/nips06-mapreducemulticore.pdf>
>> Neural Network (NN): We focus on backpropagation. By defining a network
>> structure (we use a three-layer network with two output neurons classifying
>> the data into two categories), each mapper propagates its set of data
>> through the network. For each training example, the error is
>> backpropagated to calculate the partial gradient for each of the weights
>> in the network. The reducer then sums the partial gradients from each
>> mapper and does a batch gradient descent to update the weights of the
>> network.
>> Here <http://homepages.gold.ac.uk/nikolaev/311sperc.htm> is a worked-out
>> example of the gradient descent algorithm:
>> Gradient Descent Learning Algorithm for Sigmoidal Perceptrons
>> <http://pastebin.com/6gAQv5vb>
>>    1. Which is the better way to parallelize the neural network algorithm
>>    from a MapReduce perspective? In the mapper: each record owns a partial
>>    weight (from the above example: w0, w1, w2); I suspect w0 is the bias. A
>>    random weight is assigned initially, the first record calculates the
>>    output (o) and the weight gets updated; the second record also finds the
>>    output, and deltaW is updated using the previous deltaW value. Coming
>>    into the reducer, the sum of the gradients is calculated, i.e. if we
>>    have 3 mappers, we get 3 sets of w0, w1, w2. These are summed, and
>>    using batch gradient descent we update the weights of the network.
>>    2. In the above method, how can we ensure which previous weight is
>>    taken when there is more than one map task? Each map task has its own
>>    updated weights. How can that be accurate?
>>    3. Where can I find backward propagation in the above-mentioned
>>    gradient descent neural network algorithm? Or is it fine with this
>>    implementation?
>>    4. What is the termination condition mentioned in the algorithm?
>> Please help me with some pointers.
>> Thanks in advance.
>> --
>> *Thanks & Regards *
>> *Unmesha Sreeveni U.B*
>> *Hadoop, Bigdata Developer*
>> *Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
>> http://www.unmeshasreeveni.blogspot.in/
> --
> Alpha Bagus Sunggono
> http://www.dyavacs.com
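For anyone following the thread, the map/reduce split described in the quoted
paper excerpt can be sketched roughly as follows. Plain Python stands in for
Hadoop here, the network is simplified to a single sigmoid unit, and the
function names and learning rate are made up:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mapper(w, shard):
    """Each mapper propagates its shard of (input, target) pairs through
    the network and emits only a partial gradient; no weight update here."""
    grad = [0.0] * len(w)
    for x, t in shard:
        o = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
        delta = (t - o) * o * (1 - o)  # error times sigmoid derivative
        grad[0] += delta
        for i, xi in enumerate(x):
            grad[i + 1] += delta * xi
    return grad

def reducer(w, partial_grads, lr=0.5):
    """The reducer sums the partial gradients from all mappers and applies
    one batch gradient-descent step to the shared weights."""
    total = [sum(g) for g in zip(*partial_grads)]
    return [wi + lr * gi for wi, gi in zip(w, total)]
```

Each mapper only emits a partial gradient for its shard; no weights change
until the reducer has summed everything, which matches the "weights updated
after the reduce" point above.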

*Thanks & Regards *

*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
