From: Baran Topal
Date: Wed, 7 Sep 2016 14:10:54 +0200
To: dev@horn.incubator.apache.org
Subject: Re: Use vector instead of Iterable in neuron API

Thanks Edward. I just cloned the repo, created the fork, and updated my fork.

I created the following issue:

https://issues.apache.org/jira/browse/HORN-31

Please assign it to me.

Br.

2016-09-07 1:06 GMT+02:00 Edward J. Yoon:
> If you're using Git, you're probably using pull requests. Here's info about
> pull requests:
> https://cwiki.apache.org/confluence/display/HORN/How+To+Contribute
>
> And, please feel free to file this issue on JIRA:
> http://issues.apache.org/jira/browse/HORN
>
> Then, I'll assign it to you!
>
> --
> Best Regards, Edward J. Yoon
>
>
> -----Original Message-----
> From: Baran Topal [mailto:barantopal@barantopal.com]
> Sent: Wednesday, September 07, 2016 8:02 AM
> To: dev@horn.incubator.apache.org
> Subject: Re: Use vector instead of Iterable in neuron API
>
> Hi Edward,
>
> Many thanks. I will update this as soon as possible.
>
> Br.
>
> On Wednesday, September 7, 2016, Edward J. Yoon wrote:
>
>> > directly into FloatVector and get rid of Synapse totally?
>>
>> Yes, I think that's clearer.
>>
>> Internally, each task processes its neurons' computation on the assigned
>> data split. LayeredNeuralNetwork.java contains everything; you can find
>> the "n.forward(msg);" and "n.backward(msg);" calls there. Before calling
>> the forward() or backward() method, we can set the weight vector
>> associated with the neuron using the setWeightVector() method and set
>> the argument value for the forward() method like below:
>>
>> n.setWeightVector(weight vector, e.g., weightMatrix.get(neuronID));
>> n.forward(outputs from previous layer as a FloatVector);
>>
>> Then, the user-side program can be written like below:
>>
>> forward(FloatVector input) {
>>   this.getWeightVector(); // returns the weight vector associated with itself.
>>   float vectorSum = 0;
>>   for (float element : input) {
>>     vectorSum += element;
>>   }
>> }
>>
>> If you have any trouble, please don't hesitate to ask here.
>>
>> --
>> Best Regards, Edward J. Yoon
>>
>>
>> -----Original Message-----
>> From: Baran Topal [mailto:barantopal@barantopal.com]
>> Sent: Tuesday, September 06, 2016 11:42 PM
>> To: dev@horn.incubator.apache.org
>> Subject: Re: Use vector instead of Iterable in neuron API
>>
>> Hi team and Edward;
>>
>> I have been checking this and got stuck on how to convert the list
>> structure to a DenseFloatVector. Can you help with this?
>>
>> Let me explain:
>>
>> I saw that the concrete FloatVector is actually an array-like structure
>> which is not really compatible with Synapse and FloatWritables. Is the
>> aim to convert the Synapse construction logic directly into FloatVector
>> and get rid of Synapse totally?
>>
>> //
>>
>> In the original code, we pass a list whose elements are Synapses holding
>> FloatWritables. I can see that a Synapse can be constructed with a neuron
>> id and 2 FloatWritables.
>>
>> What I tried:
>>
>> 1) I added the following function in Neuron.java
>>
>> public DenseFloatVector getWeightVector() {
>>   DenseFloatVector dfv = new DenseFloatVector(getWeights());
>>   return dfv;
>> }
>>
>> 2) I added the following function in NeuronInterface.java
>>
>> public void forward2(FloatVector messages) throws IOException;
>>
>> 3) I added the following function in TestNeuron.java
>>
>> public void forward2(FloatVector messages) throws IOException {
>>   long startTime = System.nanoTime();
>>
>>   float sum = messages.dot(this.getWeightVector());
>>   this.feedforward(this.squashingFunction.apply(sum));
>>
>>   long endTime = System.nanoTime();
>>
>>   System.out.println("Execution time for the forward2 function is: "
>>       + (endTime - startTime));
>> }
>>
>> 4) I tried to refactor testProp() in TestNeuron.java in several ways but
>> unfortunately failed with a runtime error, since the weight has no value.
>>
>> MyNeuron n_ = new MyNeuron();
>>
>> FloatVector ds = new DenseFloatVector();
>>
>> Iterator<Synapse<FloatWritable, FloatWritable>> li = x_.iterator();
>> // Iterator ie = ds.iterate();
>>
>> while (li.hasNext()) {
>>   // ds.add(li.next());
>>   // ie.
>>   Synapse ee = li.next();
>>
>>   ds.set(ee.getSenderID(), ee.getMessage());
>> }
>>
>> float[] ff = new float[2];
>> ff[0] = 1.0f;
>> ff[1] = 0.5f;
>>
>> float[] ffa = new float[2];
>> ffa[0] = 1.0f;
>> ffa[1] = 0.4f;
>>
>> DenseFloatVector dss = new DenseFloatVector(ff);
>> DenseFloatVector dssa = new DenseFloatVector(ffa);
>>
>> dss.add(dssa);
>>
>> FloatWritable a = new FloatWritable(1.0f);
>> FloatWritable b = new FloatWritable(0.5f);
>>
>> Synapse s = new Synapse(0, a, b);
>>
>> // dss.set(1, 0.5f);
>> // dss.set(1, 0.4f);
>>
>> // DenseFloatVector ds = new DenseFloatVector(ff);
>>
>> n_.forward2(dss); // forward2
>>
>>
>>
>> 2016-09-05 0:55 GMT+02:00 Baran Topal:
>> > Hi;
>> >
>> > Thanks, I am on it.
>> >
>> > Br.
>> >
>> > 2016-09-04 4:16 GMT+02:00 Edward J. Yoon:
>> >> P.S., so, if you want to test more, please see FloatVector and
>> >> DenseFloatVector.
>> >>
>> >> On Sun, Sep 4, 2016 at 11:13 AM, Edward J. Yoon wrote:
>> >>> Once we change the iterable input messages to a vector, we can
>> >>> change the legacy code like below:
>> >>>
>> >>> public void forward(FloatVector input) {
>> >>>   float sum = input.dot(this.getWeightVector());
>> >>>   this.feedforward(this.squashingFunction.apply(sum));
>> >>> }
>> >>>
>> >>> On Sat, Sep 3, 2016 at 11:10 PM, Baran Topal <barantopal@barantopal.com>
>> >>> wrote:
>> >>>> Sure.
>> >>>>
>> >>>> In the attached TestNeuron.txt,
>> >>>>
>> >>>> 1) I put "// baran" as a comment on the added functions.
>> >>>>
>> >>>> 2) The added functions and created objects have _ as a suffix
>> >>>> (e.g. backward_).
>> >>>>
>> >>>> A correction: the test execution time values above were via
>> >>>> System.nanoTime().
>> >>>>
>> >>>> Br.
>> >>>>
>> >>>> 2016-09-03 14:05 GMT+02:00 Edward J. Yoon:
>> >>>>> Interesting. Can you share your test code?
>> >>>>>
>> >>>>> On Sat, Sep 3, 2016 at 2:17 AM, Baran Topal <barantopal@barantopal.com>
>> >>>>> wrote:
>> >>>>>> Hi Edward and team;
>> >>>>>>
>> >>>>>> I did a brief test by refactoring Iterable to Vector, and in
>> >>>>>> TestNeuron.java I can see some improved times. I didn't check the
>> >>>>>> other existing test methods, but it seems the execution times are
>> >>>>>> improving for both forwarding and backwarding.
>> >>>>>>
>> >>>>>> These values are via System.currentTimeMillis().
>> >>>>>>
>> >>>>>> E.g.
>> >>>>>>
>> >>>>>> Execution time for the forward function is: 5722329
>> >>>>>> Execution time for the backward function is: 31825
>> >>>>>>
>> >>>>>> Execution time for the refactored forward function is: 72330
>> >>>>>> Execution time for the refactored backward function is: 4665
>> >>>>>>
>> >>>>>> Br.
>> >>>>>>
>> >>>>>> 2016-09-02 2:14 GMT+02:00 Yeonhee Lee:
>> >>>>>>> Hi Edward,
>> >>>>>>>
>> >>>>>>> If we don't have that kind of method in the neuron, I guess it's
>> >>>>>>> appropriate to put the method in the neuron.
>> >>>>>>> That can be one of the distinct features of Horn.
>> >>>>>>>
>> >>>>>>> Regards,
>> >>>>>>> Yeonhee
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> 2016-08-26 9:40 GMT+09:00 Edward J. Yoon:
>> >>>>>>>
>> >>>>>>>> Hi folks,
>> >>>>>>>>
>> >>>>>>>> Our current neuron API is designed like this:
>> >>>>>>>> https://github.com/apache/incubator-horn/blob/master/README.md#programming-model
>> >>>>>>>>
>> >>>>>>>> In the forward() method, each neuron receives pairs of the inputs
>> >>>>>>>> x1, x2, ..., xn from other neurons and weights w1, w2, ..., wn,
>> >>>>>>>> like below:
>> >>>>>>>>
>> >>>>>>>> public void forward(Iterable messages) throws IOException;
>> >>>>>>>>
>> >>>>>>>> Instead of this, I suggest that we just use a vector, like below:
>> >>>>>>>>
>> >>>>>>>> /**
>> >>>>>>>>  * @param input vector from other neurons
>> >>>>>>>>  */
>> >>>>>>>> public void forward(Vector input) throws IOException;
>> >>>>>>>>
>> >>>>>>>> And the neuron provides a getWeightVector() method that returns
>> >>>>>>>> the weight vector associated with itself. I think this makes more
>> >>>>>>>> sense than the current version, and it will be easier to use GPUs
>> >>>>>>>> in the future.
>> >>>>>>>>
>> >>>>>>>> What do you think?
>> >>>>>>>>
>> >>>>>>>> Thanks.
>> >>>>>>>>
>> >>>>>>>> --
>> >>>>>>>> Best Regards, Edward J. Yoon
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>
>> >>>>>
>> >>>>> --
>> >>>>> Best Regards, Edward J. Yoon
>> >>>
>> >>>
>> >>> --
>> >>> Best Regards, Edward J. Yoon
>> >>
>> >>
>> >> --
>> >> Best Regards, Edward J. Yoon
>> >
>>
>
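
For reference, the proposal in this thread boils down to replacing the per-message Synapse iteration in forward() with a single dot product against the neuron's own weight vector. The following is a minimal, self-contained Java sketch of that shape only: SimpleFloatVector, SketchNeuron, and the hard-coded sigmoid are hypothetical stand-ins invented for illustration, not the FloatVector, Neuron, or squashing-function classes in incubator-horn, whose actual signatures may differ.

// Hypothetical stand-in for a float vector; only the operation used below.
final class SimpleFloatVector {
    private final float[] values;

    SimpleFloatVector(float... values) {
        this.values = values;
    }

    // Dot product with another vector of the same length.
    float dot(SimpleFloatVector other) {
        float sum = 0f;
        for (int i = 0; i < values.length; i++) {
            sum += values[i] * other.values[i];
        }
        return sum;
    }
}

// Sketch of a neuron under the proposed vector-based forward() API.
final class SketchNeuron {
    private final SimpleFloatVector weights;
    private float output;

    SketchNeuron(SimpleFloatVector weights) {
        this.weights = weights;
    }

    // Proposed accessor: the weight vector associated with this neuron.
    SimpleFloatVector getWeightVector() {
        return weights;
    }

    // Proposed forward(): one dot product instead of iterating Synapse messages.
    void forward(SimpleFloatVector input) {
        float sum = input.dot(getWeightVector());
        output = sigmoid(sum); // stand-in for this.squashingFunction.apply(sum)
    }

    float getOutput() {
        return output;
    }

    private static float sigmoid(float x) {
        return (float) (1.0 / (1.0 + Math.exp(-x)));
    }
}

public class VectorApiSketch {
    public static void main(String[] args) {
        // Inputs from the previous layer and this neuron's weights (arbitrary example values).
        SimpleFloatVector input = new SimpleFloatVector(1.0f, 0.5f);
        SketchNeuron n = new SketchNeuron(new SimpleFloatVector(1.0f, 0.4f));

        n.forward(input);

        // 1.0 * 1.0 + 0.5 * 0.4 = 1.2, passed through the sigmoid.
        System.out.println("output = " + n.getOutput());
    }
}

Under this shape the whole forward step is one dense dot product plus a squashing function, which is what the thread argues makes batching and future GPU offloading easier.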