horn-dev mailing list archives

From "Edward J. Yoon" <edward.y...@samsung.com>
Subject RE: Implement convolutional neuron function
Date Mon, 25 Apr 2016 07:10:33 GMT
Hi,

I saw your pull request. First of all, you don't need to create a CNNHornJob
that extends HornJob, because HornJob will be used for every neural network.

Second, ConvNeuron looks exactly the same as the standard neuron. Of course,
the standard backprop formula can be used for the convolution layer, but I
expected a unit test like TestNeuron.java. For example, using hard-coded
temporary input features, a convolution kernel, and the expected output, we
can verify that the convolution works correctly.

Once this task is done, the remaining TODOs will become clearer, e.g., how to
set the number of maps for the convolution layer and how to calculate each
neuron's coordinate position internally.
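
A minimal sketch of such a test, written against a plain 2D sliding-window
convolution (cross-correlation, as CNNs compute it) rather than the real Horn
neuron API; the class name and helper below are hypothetical:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ConvolutionSanityTest {

  // Plain 2D convolution (valid mode, stride 1), independent of the Horn API.
  private double[][] convolve(double[][] input, double[][] kernel) {
    int outRows = input.length - kernel.length + 1;
    int outCols = input[0].length - kernel[0].length + 1;
    double[][] out = new double[outRows][outCols];
    for (int i = 0; i < outRows; i++)
      for (int j = 0; j < outCols; j++)
        for (int ki = 0; ki < kernel.length; ki++)
          for (int kj = 0; kj < kernel[0].length; kj++)
            out[i][j] += input[i + ki][j + kj] * kernel[ki][kj];
    return out;
  }

  @Test
  public void testHardCodedConvolution() {
    double[][] input = { { 1, 2, 3 }, { 4, 5, 6 }, { 7, 8, 9 } };
    double[][] kernel = { { 1, 0 }, { 0, 1 } };     // sums the diagonal of each 2x2 patch
    double[][] expected = { { 6, 8 }, { 12, 14 } }; // e.g. 1 + 5 = 6 for the top-left patch
    double[][] actual = convolve(input, kernel);
    for (int i = 0; i < expected.length; i++)
      for (int j = 0; j < expected[0].length; j++)
        assertEquals(expected[i][j], actual[i][j], 1e-9);
  }
}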

--
Best Regards, Edward J. Yoon


-----Original Message-----
From: Edward J. Yoon [mailto:edward.yoon@samsung.com]
Sent: Thursday, April 21, 2016 2:34 PM
To: dev@horn.incubator.apache.org
Subject: RE: Implement convolutional neuron function

Yes please.

First of all, please see the MLP example:
https://github.com/apache/incubator-horn/blob/master/src/main/java/org/apache/horn/examples/MultiLayerPerceptron.java

If you use Eclipse, you can test it with
https://github.com/apache/incubator-horn/blob/master/src/test/java/org/apache/horn/examples/NeuralNetworkTest.java
by clicking the "Run" button.

Our goal is to implement ConvNeuron.class for the convolution layer. Then, we
can create a convolutional neural network like below:

...
cnn.addLayer(150, ReLu.class, ConvNeuron.class); // convolution layer
cnn.addLayer(100, Sigmoid.class, StandardNeuron.class); // fully connected
cnn.addLayer(100, Sigmoid.class, StandardNeuron.class); // fully connected
cnn.outputLayer(10, Sigmoid.class, StandardNeuron.class); // fully connected
...

We use ReLU as the activation function for convolution layers and switch to
Sigmoid for fully connected layers.
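
Just for reference, the two activations in plain Java math (how the actual
Horn ReLU/Sigmoid classes wrap these is not shown here; the class name below
is only for illustration):

class Activations {
  // ReLU: max(0, x) -- cheap, and it does not saturate for positive inputs.
  static double relu(double x) { return Math.max(0.0, x); }

  // Sigmoid: 1 / (1 + e^-x) -- squashes fully connected outputs into (0, 1).
  static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }
}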

When you try to implement ConvNeuron, you don't need to think about fully
working code. You can write only the conv neuron, like my neuron unit test:
https://github.com/edwardyoon/incubator-horn/blob/master/src/test/java/org/apache/horn/trainer/TestNeuron.java
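
A rough sketch of what that could look like (the upward()/downward() names
follow the pseudocode quoted later in this thread; the class shape below is
an assumption, not the actual Horn neuron interface):

// Hypothetical sketch only -- not the real Horn API.
public class ConvNeuronSketch {
  private final double[] weights; // the kernel weights seen by this neuron
  private double output;          // cached for the backward pass

  public ConvNeuronSketch(double[] kernelWeights) {
    this.weights = kernelWeights;
  }

  // upward(): weighted sum of the incoming feature values, then ReLU.
  public double upward(double[] inputs) {
    double y = 0.0;
    for (int i = 0; i < inputs.length; i++) {
      y += weights[i] * inputs[i];
    }
    output = Math.max(0.0, y); // ReLU
    return output;             // would be propagated to the next layer
  }

  // downward(): ReLU gradient times the incoming delta; the kernel weight
  // updates themselves are omitted in this sketch.
  public double downward(double delta) {
    return output > 0.0 ? delta : 0.0;
  }
}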

Again, http://cs231n.github.io/convolutional-networks/ will be helpful.
Please feel free to ask questions; I'll help you.

Thanks.

--
Best Regards, Edward J. Yoon


-----Original Message-----
From: Zachary Jaffee [mailto:zij@case.edu]
Sent: Thursday, April 21, 2016 2:17 PM
To: Unknown
Subject: Re: Implement convolutional neuron function

I'll close the PR that I had made before and try to get everything to work
within the framework you laid out.

On Thu, Apr 21, 2016 at 12:58 AM, Edward J. Yoon <edward.yoon@samsung.com>
wrote:

> Ping, you there? Let's re-start :-)
>
> --
> Best Regards, Edward J. Yoon
>
>
> -----Original Message-----
> From: Edward J. Yoon [mailto:edward.yoon@samsung.com]
> Sent: Wednesday, January 20, 2016 9:44 AM
> To: dev@horn.incubator.apache.org
> Subject: RE: Implement convolutional neuron function
>
> Message type is flexible. You can create your own message type, for
> example, FeatureVectorMessage.
>
> > where the summation part happens later?
>
> As I mentioned, similar pseudocode can be found at
> https://cwiki.apache.org/confluence/display/HORN/Programming+APIs
>
> The input message is the feature map [x1, x2, ..., xN], and the convolution
> computation can be represented like:
>
> Upward() {
>   for each [x1, x2, ..., xN] do
>     y += w * x
>
>   propagate(ReLu(y));
> }
>
> Then, we train the filter weights in the downward() method. A single neuron
> is equal to a single element of the matrix.
>
> I recommend you implement only the upward and downward methods of the
> convolution neuron at the moment.
>
> --
> Best Regards, Edward J. Yoon
>
> -----Original Message-----
> From: Zachary Jaffee [mailto:zij@case.edu]
> Sent: Wednesday, January 20, 2016 5:18 AM
> To: Unknown
> Subject: Re: Implement convolutional neuron function
>
> Ok, so I think I have a clearer idea now, but just to confirm some things,
> the PropMessage in the code you have written represents the user-defined
> neuron function? Additionally, does the neuron object continue to store the
> result as the element-wise multiplication matrix, i.e. as the Hadamard
> product result, where the summation part happens later?
>
> On Mon, Jan 18, 2016 at 9:16 PM, Edward J. Yoon <edward.yoon@samsung.com>
> wrote:
>
> > Hi,
> >
> > As described in the "Convolution Demo" section of
> > http://cs231n.github.io/convolutional-networks, the convolution layer
> > constructs output maps by convolving a trainable kernel filter. The
> > animation will be very helpful. The initial weights of the kernel filter
> > are random, like in an MLP. This convolution computation between a feature
> > map and a kernel can be simplified to a vector/matrix multiplication, and
> > a batch of multiple images becomes a matrix-matrix multiplication. This is
> > the normal way in other projects like Torch.
> >
> > Instead of doing it that way, we'll do element-wise multiplication within
> > each neuron object. The user defines the neuron function, then the
> > framework processes the neurons in parallel. To understand this flow, the
> > Pregel system will be helpful.
> >
> > I roughly guess our system can be useful when a server receives an image
> > to recognize, because GPU-oriented systems are optimized to process batch
> > operations.
> >
> > --
> > Best Regards, Edward J. Yoon
> >
> >
> > -----Original Message-----
> > From: Zachary Jaffee [mailto:zij@case.edu]
> > Sent: Tuesday, January 19, 2016 1:34 PM
> > To: Unknown
> > Subject: Re: Implement convolutional neuron function
> >
> > So I've tried out a few things, but I can't seem to see what you mean
> > when you say not to use a dot product. If you have any insights as to what
> > this function would look like, I'd be interested in seeing what you are
> > thinking of.
> >
> > On Wed, Dec 16, 2015 at 6:29 PM, Edward J. Yoon <edward.yoon@samsung.com>
> > wrote:
> >
> > > Thanks.
> > >
> > > You can discuss this issue with me and shubham.
> > >
> > > I personally think we need a neuron-centric approach instead of a dot
> > > product of a matrix or tensor, so that we can parallelize computations
> > > at the neuron level. Here, the tricky issue is handling the topology of
> > > neurons within a rectangular grid (graph structure). But you can ignore
> > > this at the moment.
> > >
> > > If you have any questions/opinions, let's discuss together.
> > >
> > > --
> > > Best Regards, Edward J. Yoon
> > >
> > > -----Original Message-----
> > > From: Zachary Jaffee [mailto:zij@case.edu]
> > > Sent: Thursday, December 17, 2015 12:59 AM
> > > To: Unknown
> > > Subject: Re: Implement convolutional neuron function
> > >
> > > I'll take care of this.
> > >
> > > On Wed, Dec 16, 2015 at 2:26 AM, Edward J. Yoon <edward.yoon@samsung.com>
> > > wrote:
> > >
> > > > Hi folks,
> > > >
> > > > Does anyone volunteer for HORN-10?
> > > >
> > > > To implement the convolutional neuron function, you can refer to my
> > > > standard neuron function code here:
> > > > https://github.com/edwardyoon/incubator-horn/blob/HORN-7/src/test/java/org/apache/horn/trainer/TestNeuron.java
> > > >
> > > > And the basic equations are at
> > > > https://jianfengwang.files.wordpress.com/2015/07/forwardandbackwardpropagationofconvolutionallayer.pdf,
> > > > and an animated version of the forward and backward passes of the
> > > > convolutional neuron can be found at
> > > > http://cs231n.github.io/convolutional-networks/.
> > > >
> > > > Thanks!
> > > >
> > > > --
> > > > Best Regards, Edward J. Yoon
> > > >
> > > >
> > > >
> > > >
> > >
> > >
> > > --
> > > Zach Jaffee
> > > B.S. Computer Science
> > > Case Western Reserve University Class of 2017
> > > Operations Director | WRUW FM 91.1 Cleveland
> > > Secretary | Recruitment Chair | Phi Kappa Theta Fraternity
> > > (917) 881-0646
> > > zjaffee.com
> > > github.com/ZJaffee
> > >
> > >
> > >
> >
> >
> > --
> > Zach Jaffee
> > B.S. Computer Science
> > Case Western Reserve University Class of 2017
> > Operations Director | WRUW FM 91.1 Cleveland
> > Secretary | Recruitment Chair | Phi Kappa Theta Fraternity
> > (917) 881-0646
> > zjaffee.com
> > github.com/ZJaffee
> >
> >
> >
>
>
> --
> Zach Jaffee
> B.S. Computer Science
> Case Western Reserve University Class of 2017
> Operations Director | WRUW FM 91.1 Cleveland
> Secretary | Recruitment Chair | Phi Kappa Theta Fraternity
> (917) 881-0646
> zjaffee.com
> github.com/ZJaffee
>
>
>
>
>


--
Zach Jaffee
B.S. Computer Science
Case Western Reserve University Class of 2017
Operations Director | WRUW FM 91.1 Cleveland
(917) 881-0646
zjaffee.com
linkedin.com/in/zjaffee
github.com/ZJaffee




