mahout-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: Recommender system implementations
Date Fri, 22 Oct 2010 06:13:50 GMT
Actually, this isn't the gold standard at all. Testing on your training data will give you very misleading results, and many algorithms that do worse on the training data will actually do much, much better on new data. That is the whole point of avoiding over-fitting.

Test on held-out data for both the original and the derived models, just like Sean suggested. To do anything else will be misleading at best.
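
For concreteness, here is a minimal sketch of that kind of held-out evaluation using the Taste RecommenderEvaluator. The prefs.csv file name, the 90/10 split, and the user-based recommender inside the builder are placeholder assumptions; substitute whatever builder produces the model you actually want to evaluate.

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class HeldOutEval {
  public static void main(String[] args) throws Exception {
    // "prefs.csv" is a placeholder: one userID,itemID,preference line per pref
    DataModel model = new FileDataModel(new File("prefs.csv"));

    // The builder is invoked on the training split only, never on the held-out prefs
    RecommenderBuilder builder = new RecommenderBuilder() {
      public Recommender buildRecommender(DataModel trainingModel) throws TasteException {
        UserSimilarity similarity = new PearsonCorrelationSimilarity(trainingModel);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, trainingModel);
        return new GenericUserBasedRecommender(trainingModel, neighborhood, similarity);
      }
    };

    // Hold out 10% of each user's prefs as test data, train on the other 90%,
    // evaluate over all users; the score is average absolute difference, so lower is better
    RecommenderEvaluator evaluator = new AverageAbsoluteDifferenceRecommenderEvaluator();
    double score = evaluator.evaluate(builder, null, model, 0.9, 1.0);
    System.out.println("Average absolute difference on held-out prefs: " + score);
  }
}

Run it once with a builder over the original data and once with a builder over the derived data; comparing the two held-out scores is the comparison that actually tells you something.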

On Thu, Oct 21, 2010 at 9:39 PM, Lance Norskog <goksron@gmail.com> wrote:

> Now, obviously, the gold standard for recommendations is the data in the original model. So I make recommendations from both the original and the derived models, using the user/item prefs given in the original data. I don't really care what the user gave as preferences: I want to know what the recommender algorithm itself thinks. But the recommenders just parrot back the data model instead of giving me their own opinion, which is the point of this whole thread. How the recommender algorithms work is a side issue, though; I'm trying to use them as an indirect measurement of something else.
>
> What is another way to test what I'm trying to test? What is another way to evaluate the quality of my derivation function?
>
