mahout-user mailing list archives

From Sean Owen <sro...@gmail.com>
Subject Re: Average Absolute Difference Recommender Evaluator metric
Date Tue, 25 Oct 2011 23:58:44 GMT
These values don't have a well-known absolute meaning. 1.2 might or might
not be a good average error. Most obviously, it depends on the scale of the
input: on a scale of ratings from 0 to 3 that's a big error, but not on a
scale of 0 to 10.

At least, both AAD and RMSE don't vary if you shift all ratings by some
amount, and they scale proportionally if you scale the values. So you could
compare values obtained on data over different scales by normalizing, or by
talking about the error as a percentage of the rating range: like saying
estimates are off by 12% on average, in the case of a 0 to 10 scale.
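
Normalizing is just dividing by the size of the rating range. A trivial
sketch in Java (the numbers are the ones from this thread):

double aad = 1.2;          // value reported by the evaluator
double minRating = 0.0;    // assumed 0-to-10 rating scale
double maxRating = 10.0;
// fraction of the rating range: 0.12, i.e. off by 12% on average
double normalizedError = aad / (maxRating - minRating);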

How can you use AAD or RMSE? Lower is better. Use the settings or algorithm
that minimizes this.

But to be clear, this value will vary for the same algorithm on different
input. It's not a property of the algorithm but of the algorithm plus the
data. Normally you'd fix the data as some representative sample of real
data. Then only the algorithm varies, and you can freely compare across
algorithms without thinking about normalizing.
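
In code, comparing algorithms this way looks roughly like the sketch below,
using AverageAbsoluteDifferenceRecommenderEvaluator from the Taste API. The
file name, neighborhood size and split percentages are just placeholder
choices; build a second RecommenderBuilder the same way (say, around
TanimotoCoefficientSimilarity) and compare the scores on the same DataModel.

import java.io.File;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.EuclideanDistanceSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class EvaluateAad {
  public static void main(String[] args) throws Exception {
    // userID,itemID,rating triples: the fixed, representative data set
    DataModel model = new FileDataModel(new File("ratings.csv"));

    // one candidate algorithm: user-based with Euclidean similarity
    RecommenderBuilder euclidean = new RecommenderBuilder() {
      public Recommender buildRecommender(DataModel model) throws TasteException {
        UserSimilarity similarity = new EuclideanDistanceSimilarity(model);
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(10, similarity, model);
        return new GenericUserBasedRecommender(model, neighborhood, similarity);
      }
    };

    RecommenderEvaluator evaluator =
        new AverageAbsoluteDifferenceRecommenderEvaluator();
    // train on 90% of each user's prefs, evaluate on the rest, for all users
    double aad = evaluator.evaluate(euclidean, null, model, 0.9, 1.0);
    System.out.println("AAD: " + aad);
  }
}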

This is fairly different from the question of boolean data and recommenders.
You can't compute a real AAD in that case, so no comparison is possible that
way. You can only fall back to precision/recall, or AUC as Ted says. These
are quite different and more abstract measures.
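
For precision/recall there's GenericRecommenderIRStatsEvaluator. A rough
sketch, reusing the model and builder from above ("at 10", and letting the
evaluator choose the relevance threshold, are again just placeholder
choices):

import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;

RecommenderIRStatsEvaluator irEvaluator = new GenericRecommenderIRStatsEvaluator();
// precision and recall at 10 recommendations per user
IRStatistics stats = irEvaluator.evaluate(euclidean, null, model, null, 10,
    GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD, 1.0);
System.out.println("Precision@10: " + stats.getPrecision());
System.out.println("Recall@10: " + stats.getRecall());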

On Oct 26, 2011 12:13 AM, "lee carroll" <lee.a.carroll@googlemail.com>
wrote:

> >No, you're welcome to make comparisons in these tables. It's valid.
>
> Okay I think I'm back at square one.
> So we have an AAD, using a Euclidean similarity measure, of 1.2. This is
> calculated for ratings in the range of 1 through to 10.
> For the same data we also have a Tanimoto AAD of 1.3.
>
> Now imagine the ratings are instead in the range of 1 through to 20, but
> all the users rate in exactly the same way, i.e. (rating value) * 2.
> We would now have, for the Euclidean-driven recommender, an AAD of 2.4,
> but the Tanimoto AAD would still be 1.3.
>
> How can we use AAD to compare the two recommenders ?
>
> A bit of background, just to explain why I'm labouring this point (and
> I'm well aware that I'm labouring it).
> Being able to describe AAD to a business stakeholder as "the amount a
> prediction would differ from the actual rating (lower the better)"
> makes the evaluation of the recommender vivid and concrete. The
> confidence this creates is not to be underestimated. However, how do I
> describe to a business stakeholder the meaning of a Tanimoto-produced
> AAD? I can't at the moment :-)
>
> cheers Lee C
>
