horn-dev mailing list archives

From Baran Topal <baranto...@barantopal.com>
Subject Re: Use vector instead of Iterable in neuron API
Date Sun, 04 Sep 2016 22:55:38 GMT
Hi;

Thanks, I am on it.

Br.

2016-09-04 4:16 GMT+02:00 Edward J. Yoon <edwardyoon@apache.org>:
> P.S., so, if you want to test more, please see FloatVector and
> DenseFloatVector.
>
> On Sun, Sep 4, 2016 at 11:13 AM, Edward J. Yoon <edwardyoon@apache.org> wrote:
>> Once we change the iterable input messages to the vector, we can
>> change the legacy code like below:
>>
>> public void forward(FloatVector input) {
>>   float sum = input.dot(this.getWeightVector());
>>   this.feedforward(this.squashingFunction.apply(sum));
>> }
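As an editorial aside, the difference between the two APIs can be sketched as below. This is a simplified stand-in, not the actual Horn classes: the Message type and the array-based vectors are illustrative assumptions only.

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch contrasting the Iterable-based forward with the
// proposed vector-based forward. Message and the float[] "vectors"
// are simplified stand-ins for illustration, not Horn's real types.
public class ForwardSketch {

  // Stand-in for the per-message (input, weight) pair delivered by the
  // Iterable-based API.
  static class Message {
    final float input, weight;
    Message(float input, float weight) { this.input = input; this.weight = weight; }
  }

  // Legacy style: iterate over messages and accumulate the weighted sum
  // one pair at a time.
  static float legacyForward(Iterable<Message> messages) {
    float sum = 0f;
    for (Message m : messages) {
      sum += m.input * m.weight;
    }
    return sum;
  }

  // Proposed style: a single dot product between the input vector and
  // the neuron's weight vector, which maps more directly onto BLAS/GPU
  // kernels.
  static float vectorForward(float[] input, float[] weights) {
    float sum = 0f;
    for (int i = 0; i < input.length; i++) {
      sum += input[i] * weights[i];
    }
    return sum;
  }

  public static void main(String[] args) {
    List<Message> messages = Arrays.asList(
        new Message(1f, 0.5f), new Message(2f, -0.25f), new Message(3f, 1f));
    float[] input = {1f, 2f, 3f};
    float[] weights = {0.5f, -0.25f, 1f};
    // Both formulations compute the same weighted sum:
    // 1*0.5 + 2*(-0.25) + 3*1 = 3.0
    System.out.println(legacyForward(messages));       // 3.0
    System.out.println(vectorForward(input, weights)); // 3.0
  }
}
```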
>>
>>
>>
>>
>> On Sat, Sep 3, 2016 at 11:10 PM, Baran Topal <barantopal@barantopal.com> wrote:
>>> Sure.
>>>
>>> In the attached TestNeuron.txt:
>>>
>>> 1) I marked the added functions with a // baran comment.
>>>
>>> 2) The added functions and created objects have _ as a suffix
>>>
>>> (e.g. backward_).
>>>
>>> A correction: the test execution time values above were measured via System.nanoTime().
>>>
>>> Br.
>>>
>>> 2016-09-03 14:05 GMT+02:00 Edward J. Yoon <edwardyoon@apache.org>:
>>>> Interesting. Can you share your test code?
>>>>
>>>>> On Sat, Sep 3, 2016 at 2:17 AM, Baran Topal <barantopal@barantopal.com> wrote:
>>>>> Hi Edward and team;
>>>>>
>>>>> I ran a brief test by refactoring Iterable to Vector and, on
>>>>> TestNeuron.java, I can see improved times. I didn't check the
>>>>> other existing test methods, but the execution times improve for
>>>>> both the forward and backward passes.
>>>>>
>>>>> These values are via System.currentTimeMillis().
>>>>>
>>>>> E.g.
>>>>>
>>>>>
>>>>> Execution time for the forward function is: 5722329
>>>>> Execution time for the backward function is: 31825
>>>>>
>>>>> Execution time for the refactored forward function is: 72330
>>>>> Execution time for the refactored backward function is: 4665
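As a rough editorial illustration of how such timings can be taken with System.nanoTime() (the method the correction later in the thread refers to), here is a minimal sketch. The dot-product workload and all names are placeholders, and a serious micro-benchmark would also need JIT warm-up iterations to produce stable numbers.

```java
// Minimal elapsed-time measurement sketch using System.nanoTime().
// The workload (a dot product over placeholder arrays) is illustrative
// only; it is not the actual TestNeuron.java benchmark from the thread.
public class TimingSketch {

  static float dot(float[] a, float[] b) {
    float sum = 0f;
    for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
    return sum;
  }

  public static void main(String[] args) {
    float[] a = new float[1_000_000];
    float[] b = new float[1_000_000];
    java.util.Arrays.fill(a, 0.5f);
    java.util.Arrays.fill(b, 2.0f);

    // nanoTime() is monotonic and suited to elapsed-time measurement,
    // unlike currentTimeMillis(), which tracks wall-clock time.
    long start = System.nanoTime();
    float result = dot(a, b);
    long elapsed = System.nanoTime() - start;

    System.out.println("Execution time for the forward function is: " + elapsed);
    System.out.println("result = " + result);
  }
}
```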
>>>>>
>>>>> Br.
>>>>>
>>>>> 2016-09-02 2:14 GMT+02:00 Yeonhee Lee <ssallys0130@gmail.com>:
>>>>>> Hi Edward,
>>>>>>
>>>>>> If we don't have that kind of method in the neuron, I guess it's
>>>>>> appropriate to put the method in the neuron.
>>>>>> That can be one of the distinct features of Horn.
>>>>>>
>>>>>> Regards,
>>>>>> Yeonhee
>>>>>>
>>>>>>
>>>>>> 2016-08-26 9:40 GMT+09:00 Edward J. Yoon <edward.yoon@samsung.com>:
>>>>>>
>>>>>>> Hi folks,
>>>>>>>
>>>>>>> Our current neuron API is designed like this:
>>>>>>> https://github.com/apache/incubator-horn/blob/master/README.md#programming-model
>>>>>>>
>>>>>>> In the forward() method, each neuron receives the pairs of inputs x1, x2,
>>>>>>> ..., xn from other neurons and weights w1, w2, ..., wn, like below:
>>>>>>>
>>>>>>>   public void forward(Iterable<M> messages) throws IOException;
>>>>>>>
>>>>>>> Instead of this, I suggest that we use just vector like below:
>>>>>>>
>>>>>>>   /**
>>>>>>>    * @param input vector from other neurons
>>>>>>>    */
>>>>>>>   public void forward(Vector input) throws IOException;
>>>>>>>
>>>>>>> In addition, the neuron provides a getWeightVector() method that returns the
>>>>>>> weight vector associated with itself. I think this makes more sense than the
>>>>>>> current version, and it will be easier to use GPUs in the future.
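As an editorial sketch, a user-defined neuron under the proposed API might look roughly like the following. The Vector class here is a simplified stand-in for Horn's vector type, and the SigmoidNeuron name and sigmoid squashing function are illustrative assumptions, not the framework's actual classes.

```java
// Sketch of a user-defined neuron under the proposed vector API.
// Vector is a simplified stand-in; SigmoidNeuron and the sigmoid
// squashing function are assumptions made for illustration only.
public class NeuronSketch {

  static class Vector {
    final float[] values;
    Vector(float... values) { this.values = values; }
    float dot(Vector other) {
      float sum = 0f;
      for (int i = 0; i < values.length; i++) sum += values[i] * other.values[i];
      return sum;
    }
  }

  static class SigmoidNeuron {
    private final Vector weights;
    private float output;

    SigmoidNeuron(Vector weights) { this.weights = weights; }

    // Mirrors the neuron-local weight accessor proposed above.
    Vector getWeightVector() { return weights; }

    // forward() in the proposed style: one dot product against the
    // neuron's own weight vector, then the squashing function.
    void forward(Vector input) {
      float sum = input.dot(getWeightVector());
      output = 1f / (1f + (float) Math.exp(-sum)); // sigmoid squashing
    }

    float getOutput() { return output; }
  }

  public static void main(String[] args) {
    SigmoidNeuron neuron = new SigmoidNeuron(new Vector(0.5f, -0.25f, 1f));
    neuron.forward(new Vector(1f, 2f, 3f));
    System.out.println(neuron.getOutput()); // sigmoid(3.0), approx. 0.9526
  }
}
```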
>>>>>>>
>>>>>>> What do you think?
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards, Edward J. Yoon
