hama-dev mailing list archives

From "info@christianherta.de" <i...@christianherta.de>
Subject Multilayer Perceptron
Date Sat, 17 Nov 2012 11:05:04 GMT
Hello Tommaso, hello All,

thanks for all the replies and the discussion. I found the hint to the work
on terascale deep learning by Jeff Dean et al. especially interesting.
Deep learning of MLPs is based on learning autoencoders by backpropagation,
see e.g. http://www.stanford.edu/class/cs294a/handouts.html
That's exactly what I am interested in.

The basic learning algorithm should be independent of how the learning is
scaled out (BSP or asynchronous messaging).
In the current MLP implementation there is a method that presents a training
example to the MLP. The result of this method is a set of weight changes for
the neuron weights. With this, both online learning and (simple) batch
learning can be realized.
In my opinion, separating the learning algorithm from the scale-out strategy
is the basis for developing both independently.
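
To make that separation concrete, here is a minimal sketch of the contract I
have in mind (all names are hypothetical, nothing here is taken from existing
code):

// Hypothetical sketch: the learning algorithm only produces weight
// deltas per layer; a separate component decides how to combine and
// apply them (immediately for online learning, summed over a chunk
// for batch learning, or via messaging for distributed learning).
public interface Mlp {
  // Present one training example; return the resulting weight changes
  // (one delta matrix per layer) without applying them to the network.
  double[][][] computeWeightUpdates(double[] input, double[] target);

  // Apply (possibly combined) weight changes to the network weights.
  void applyWeightUpdates(double[][][] deltas);
}

Online learning would call applyWeightUpdates after every example; batch
learning would sum the deltas over a chunk first.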

I am wondering if there is any matrix/vector functionality in Hama. If there
is, it would also be possible to use the Hama matrix implementation. The
current MLP implementation uses only matrix-vector multiplication and the
addition of matrices. Storing and serializing matrices/vectors must be
possible, too.
I will make myself more familiar with Hama to see how the MLP can be
integrated into it.
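
Only a handful of matrix operations are actually needed. A minimal sketch of
that contract (a hypothetical interface, not an existing Hama API); extending
Hadoop's Writable is one obvious way to get serialization:

import org.apache.hadoop.io.Writable;

// Hypothetical minimal matrix contract for the MLP: just the two
// operations the current implementation uses. Extending Writable
// covers the storage/serialization requirement via Hadoop's
// standard write(DataOutput)/readFields(DataInput) methods.
public interface MlpMatrix extends Writable {
  double[] multiplyVector(double[] v); // matrix-vector multiplication
  MlpMatrix add(MlpMatrix other);      // element-wise matrix addition
}

Any library that provides these two operations plus serialization would do.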

> I would say that Christian can create a JIRA, upload a patch and such. Do
> you need help implementing it with BSP?
Yes, I will create a JIRA issue. As I said above, the basic implementation
should be independent of BSP. That is the first step. But any support is
very helpful here, too.

Cheers
 Christian



Thomas Jungblut <thomas.jungblut@gmail.com> wrote on 16 November 2012 at
07:04:
> >
> > Regarding deep learning, that's surely something interesting we should
> > keep an eye on; maybe we can start from Christian's proposal, implement
> > that, and move to DL if/when we have something ready.
>
>
> Yes, I'm currently prototyping. Not sure if it totally fits into Hama right
> now, but messaging is already there, so that's half the work ;)
>
> I would say that Christian can create a JIRA, upload a patch and such. Do
> you need help implementing it with BSP?
>
> 2012/11/16 Suraj Menon <surajsmenon@apache.org>
>
> > +1 on Tommaso's suggestion.
> >
> > On Thu, Nov 15, 2012 at 8:25 AM, Tommaso Teofili
> > <tommaso.teofili@gmail.com>wrote:
> >
> > > Hi Christian,
> > >
> > > it's nice to hear back from you on the list :)
> > > The backprop algorithm is something that I'd be very happy to see
> > > implemented here; I also spent some time on it myself some months ago
> > > but didn't manage to finalize the implementation so far.
> > >
> > > Your approach sounds reasonable. I haven't read the paper pointed out
> > > by Edward (thanks!) yet, but it may help us evaluate how to split
> > > things.
> > >
> > > Regarding deep learning, that's surely something interesting we should
> > > keep an eye on; maybe we can start from Christian's proposal, implement
> > > that, and move to DL if/when we have something ready.
> > >
> > > Thanks and have a nice day,
> > > Tommaso
> > >
> > >
> > > p.s.:
> > > Regarding the matrix library, my opinion is that, for a start, we
> > > should try to use something that just works (I don't know Colt, so I
> > > can't say) in order to go straight to the algorithm itself. For the
> > > mid/long term, though, I'd also prefer our own matrix multiplication /
> > > inverse / etc., just because that would be useful for other tasks as
> > > well.
> > >
> > >
> > > 2012/11/15 info@christianherta.de <info@christianherta.de>
> > >
> > > > Dear All,
> > > > what do you think about scaling out the learning of Multilayer
> > > > Perceptrons (MLPs) with BSP?
> > > > I heard Tommaso's talk at ApacheCon. At first glance the programming
> > > > model BSP seems to fit this purpose better than MapReduce.
> > > > The basic idea to distribute the backprop algorithm is the following:
> > > >
> > > > Distribution of the learning can be done as follows (batch learning;
> > > > see the sketch after this list):
> > > > 1. Partition the data into x chunks.
> > > > 2. On each worker node: learn the weight changes (as matrices) on its
> > > >    chunk.
> > > > 3. Combine the matrices (weight changes) and simultaneously update the
> > > >    weights on each node; then go back to 2.
> > > >
> > > > Maybe this procedure can also be done with random parts of the chunks
> > > > (distributed quasi-online learning).
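> > > >
> > > > A rough sketch of one such round per node, just to make the loop
> > > > concrete (Peer, Example, Mlp and sum() are all hypothetical names,
> > > > not a real API):
> > > >
> > > > // Hypothetical sketch of one batch-learning round on one node.
> > > > // "peer" stands for whatever messaging/barrier facility the
> > > > // framework provides (e.g. a BSP peer); sum() is element-wise
> > > > // matrix addition, treating null as all zeros.
> > > > void trainingRound(Peer peer, Mlp mlp, Example[] chunk) {
> > > >   double[][][] local = null;
> > > >   for (Example e : chunk)                 // step 2: local learning
> > > >     local = sum(local, mlp.computeWeightUpdates(e.input, e.target));
> > > >   peer.sendToAllOtherPeers(local);        // ship local deltas out
> > > >   peer.sync();                            // barrier between 2 and 3
> > > >   for (double[][][] remote : peer.receivedDeltas())
> > > >     local = sum(local, remote);           // step 3: combine...
> > > >   mlp.applyWeightUpdates(local);          // ...and update; back to 2
> > > > }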
> > > >
> > > > I wrote the (basic) backprop algorithm of a multilayer perceptron
> > > > (see the Mahout patch
> > > > https://issues.apache.org/jira/browse/MAHOUT-976). It uses the Mahout
> > > > matrix library, which under the hood is the Colt matrix library from
> > > > CERN.
> > > >
> > > > Probably using the Colt matrix library would also be suitable for
> > > > Hama. Then it would be easy to port the MLP to Hama.
> > > >
> > > > What do you think about it?
> > > >
> > > > Thanks for your response.
> > > >
> > > > Cheers
> > > > Christian
> > >
> >