hama-dev mailing list archives

From Thomas Jungblut <thomas.jungb...@gmail.com>
Subject Re: Multilayer Perceptron
Date Fri, 16 Nov 2012 06:04:25 GMT
>
> Regarding deep learning, that's surely something interesting we should
> keep an eye on; maybe we can start from Christian's proposal, implement
> that, and move to DL if/when we have something ready.


Yes, I'm currently prototyping. Not sure if it totally fits into Hama right
now, but messaging is already there, so that's half the work ;)

I would say that Christian can create a JIRA, upload a patch and so on. Do
you need help implementing it with BSP?
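
To give an idea of the shape, here is a rough, untested sketch of what one
superstep per training epoch could look like against Hama's BSP API. I'm
using Mahout's VectorWritable as the message payload just for illustration;
MLPTrainer is a made-up name and the backprop helpers are stubs, not a
finished design:

  import java.io.IOException;

  import org.apache.hadoop.io.NullWritable;
  import org.apache.hama.bsp.BSP;
  import org.apache.hama.bsp.BSPPeer;
  import org.apache.hama.bsp.sync.SyncException;
  import org.apache.mahout.math.VectorWritable;

  public class MLPTrainer extends
      BSP<NullWritable, NullWritable, NullWritable, NullWritable, VectorWritable> {

    private static final int MAX_EPOCHS = 100;

    @Override
    public void bsp(BSPPeer<NullWritable, NullWritable, NullWritable,
        NullWritable, VectorWritable> peer)
        throws IOException, SyncException, InterruptedException {
      for (int epoch = 0; epoch < MAX_EPOCHS; epoch++) {
        // compute weight deltas on this peer's data chunk (stub below)
        VectorWritable localDelta = backpropOnLocalChunk();
        // broadcast the deltas to every peer, including ourselves
        for (String other : peer.getAllPeerNames()) {
          peer.send(other, localDelta);
        }
        peer.sync(); // superstep barrier: all deltas delivered after this
        // sum the received deltas, then apply the averaged update
        VectorWritable msg;
        while ((msg = peer.getCurrentMessage()) != null) {
          accumulate(msg);
        }
        applyAveragedUpdate();
      }
    }

    // placeholders for the actual backprop / update code:
    private VectorWritable backpropOnLocalChunk() { return null; }
    private void accumulate(VectorWritable delta) { }
    private void applyAveragedUpdate() { }
  }

Each training epoch becomes one superstep: every peer computes the deltas
on its chunk, syncs, and then combines what it received. Convergence
checking would need either a fixed epoch count like above or an extra
message round.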

2012/11/16 Suraj Menon <surajsmenon@apache.org>

> +1 on Tommaso's suggestion.
>
> On Thu, Nov 15, 2012 at 8:25 AM, Tommaso Teofili
> <tommaso.teofili@gmail.com> wrote:
>
> > Hi Christian,
> >
> > it's nice to hear back from you on the list :)
> > The backprop algorithm is something that I'd be very happy to see
> > implemented here; I also spent some time on it myself some months ago
> > but didn't manage to finalize the implementation.
> >
> > Your approach sounds reasonable. I haven't read the paper Edward pointed
> > to (thanks!) yet, but it may help us evaluate how to split things.
> >
> > Regarding deep learning, that's surely something interesting we should
> > keep an eye on; maybe we can start from Christian's proposal, implement
> > that, and move to DL if/when we have something ready.
> >
> > Thanks and have a nice day,
> > Tommaso
> >
> >
> > p.s.:
> > Regarding the matrix library, my opinion is that, to start, we should
> > use something that just works (I don't know Colt, so I can't say) in
> > order to get straight to the algorithm itself; but for the mid/long
> > term I'd prefer our own matrix multiplication / inversion / etc.,
> > simply because that would be useful for other tasks as well.
> >
> >
> > 2012/11/15 info@christianherta.de <info@christianherta.de>
> >
> > > Dear All,
> > > what do you think about scaling out the learning of Multilayer
> > > Perceptrons (MLPs) with BSP?
> > > I heard Tommaso's talk at ApacheCon. At first glance the BSP
> > > programming model seems to fit this purpose better than MapReduce.
> > > The basic idea for distributing the backprop algorithm is the
> > > following:
> > >
> > > Distribution of learning can be done as follows (batch learning):
> > > 1. Partitioning of the data into x chunks
> > > 2. On each worker node: learning the weight changes (as matrices) on
> > > its chunk
> > > 3. Combining the matrices (weight changes) and simultaneously
> > > updating the weights on each node (sketched below) - then back to 2
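> > >
> > > In plain Java terms, step 3 would be roughly the following (just a
> > > sketch, all the variable names are made up):
> > >
> > >   // sum the per-chunk weight-change matrices ...
> > >   double[][] combined = new double[rows][cols];
> > >   for (double[][] delta : deltasFromAllChunks) {
> > >     for (int i = 0; i < rows; i++)
> > >       for (int j = 0; j < cols; j++)
> > >         combined[i][j] += delta[i][j];
> > >   }
> > >   // ... and apply the averaged update to every node's weight copy
> > >   for (int i = 0; i < rows; i++)
> > >     for (int j = 0; j < cols; j++)
> > >       weights[i][j] += learningRate * combined[i][j] / numChunks;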
> > >
> > > Maybe this procedure can also be done with random parts of the chunks
> > > (distributed quasi-online learning).
> > >
> > > I wrote the (basic) backprop algorithm of a multilayer perceptron
> > > (see the Mahout patch
> > > https://issues.apache.org/jira/browse/MAHOUT-976). It uses the Mahout
> > > matrix library, which under the hood is the Colt matrix library from
> > > CERN.
> > >
> > > The Colt matrix library would probably also be suitable for Hama;
> > > then it would be easy to port the MLP over.
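> > >
> > > For reference, the kind of operation backprop needs looks like this
> > > against Colt directly (from memory, untested):
> > >
> > >   import cern.colt.matrix.DoubleFactory2D;
> > >   import cern.colt.matrix.DoubleMatrix2D;
> > >   import cern.colt.matrix.linalg.Algebra;
> > >
> > >   // forward pass of one layer: net input = weights * activations
> > >   DoubleMatrix2D weights = DoubleFactory2D.dense.make(
> > >       new double[][] { { 0.1, 0.2 }, { 0.3, 0.4 } });
> > >   DoubleMatrix2D input = DoubleFactory2D.dense.make(
> > >       new double[][] { { 1.0 }, { 0.5 } });
> > >   DoubleMatrix2D net = Algebra.DEFAULT.mult(weights, input);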
> > >
> > > What do you think about it?
> > >
> > > Thanks for your response.
> > >
> > > Cheers
> > >  Christian
> >
>
