mahout-dev mailing list archives

From Reto Matter <reto.mat...@gmail.com>
Subject Re: What about implementing ELM?
Date Tue, 30 Apr 2013 14:26:50 GMT
Hmm, this sounds like a cool idea....


On Tue, Apr 30, 2013 at 4:11 PM, Sean Owen <srowen@gmail.com> wrote:

> I've just skimmed it and so probably missed some key details, but this
> looks like a hidden layer model where you just randomly pick values
> for the hidden layer parameters, and then solve a simple linear
> regression model to predict outputs from the randomized hidden layer.
> The random values are never tuned or learned. It sounds too good to be
> true at first, and the test results show it does worse on regression
> tasks (?) but it gets close and is simple.
>
> Maybe you could think of it as an ensemble type of approach. You make
> a bunch of random projections of the input, each of which is then used
> to solve a different regression problem for the same output. Those
> answers are combined via weights that you learn in one step.
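[The scheme Sean describes — a random, never-tuned hidden layer followed by a one-shot least-squares solve for the output weights — can be sketched in plain Python. This is a minimal illustration, not Mahout code; the tanh activation, the small ridge term, and all function names are assumptions for the sketch.]

```python
import math
import random

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for A x = rhs."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def elm_train(X, y, n_hidden, seed=0):
    """Minimal ELM: random input weights, analytic output weights."""
    rng = random.Random(seed)
    d = len(X[0])
    # Step 1: random hidden-layer weights and biases, never tuned afterwards.
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    # Step 2: hidden-layer activations H (tanh assumed for the sketch).
    H = [[math.tanh(sum(w[j] * x[j] for j in range(d)) + bi)
          for w, bi in zip(W, b)] for x in X]
    # Step 3: output weights beta via the normal equations (H^T H) beta = H^T y,
    # with a tiny ridge term on the diagonal for numerical stability.
    m, n = len(H), n_hidden
    A = [[sum(H[k][i] * H[k][j] for k in range(m)) + (1e-8 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(H[k][i] * y[k] for k in range(m)) for i in range(n)]
    return W, b, solve(A, rhs)

def elm_predict(model, x):
    """Predict one output: beta-weighted sum of hidden activations."""
    W, b, beta = model
    h = [math.tanh(sum(w[j] * x[j] for j in range(len(x))) + bi)
         for w, bi in zip(W, b)]
    return sum(bi * hi for bi, hi in zip(beta, h))
```

[The only learned parameters are the beta weights, and they come from a single linear solve — which is why training avoids the iterations of backpropagation.]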
>
> On Tue, Apr 30, 2013 at 2:20 PM, Reto Matter <reto.matter@gmail.com>
> wrote:
> > As far as I understand ELMs, the main difference is that learning in that
> > particular setting comes down to 3 relatively simple steps; no iteration,
> > as in other learning algorithms (e.g. backpropagation), is needed. So, in
> > that respect, the learning phase is blazingly fast compared to other
> > approaches.
> > I don't think they are any better in terms of generalization
> > capabilities, but I haven't studied the theory behind ELMs well enough to
> > really be sure...
> >
> > greets,
> > reto
> >
> >
> > On Tue, Apr 30, 2013 at 2:45 PM, Louis Hénault <louis.henault@level5.fr>
> > wrote:
> >
> >> I am not at home where I have my course notes about it, but you can have
> >> a look here for example:
> >> look here for example:
> >> http://msrvideo.vo.msecnd.net/rmcvideos/144113/dl/144113.pdf
> >> On page 50 there is a comparison between SVM and ELM, and ELM outperforms
> >> SVM in both training and testing times.
> >>
> >> It is not easy to give theoretical reasons why ELMs are so quick compared
> >> to SVMs, but they are.
> >>
> >> If anyone is interested in working on it with me, just tell me.
> >>
> >>
> >>
> >> 2013/4/30 Sean Owen <srowen@gmail.com>
> >>
> >> > If you care to work on it, you should work on it. Implementations
> >> > exist or don't exist because someone created it, or nobody was
> >> > interested in creating it.
> >> >
> >> > I have never heard of 'extreme learning' and found this summary:
> >> >
> >> >
> >>
> http://www.slideshare.net/formatc666/extreme-learning-machinetheory-and-applications
> >> >
> >> > If it's accurate, this is just describing a single hidden layer model
> >> > trained with back propagation. I don't see what's new; the part about
> >> > learning the beta weights is simple linear algebra.
> >> >
> >> > If it's just a hidden layer model, it's not necessarily better than
> >> > SVMs, no.
> >> >
> >> > On Tue, Apr 30, 2013 at 11:05 AM, Louis Hénault
> >> > <louis.henault@level5.fr> wrote:
> >> > > Hi everybody,
> >> > >
> >> > > Many people are trying to integrate SVM into Mahout. I can
> >> > > understand why, since SVMs are really efficient in a "small data"
> >> > > context.
> >> > > But, as you may know, SVM has:
> >> > > - a slow learning speed
> >> > > - poor learning scalability
> >> > >
> >> > > In contrast, ELMs give results which are usually at least as good as
> >> > > SVM's and are something like 1000x faster.
> >> > > So, why not try working on this topic?
> >> > >
> >> > > (Sorry if someone already talked about it; I'm new on this mailing
> >> > > list and did not find anything after some research)
> >> > >
> >> > > Regards
> >> >
> >>
>
