mahout-user mailing list archives

Subject Strange evaluation results for BookCrossingRecommender
Date Tue, 09 Mar 2010 16:50:30 GMT

When testing the Mahout example BookCrossingRecommender with default settings
(GenericUserBasedRecommender, PearsonCorrelationSimilarity,
NearestNUserNeighborhood), I noticed that the results of the evaluation
(AverageAbsoluteDifferenceRecommenderEvaluator) change
randomly from one run to another. I get scores between 2.1 and 4.8.
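For reference, here is a sketch of the setup I am describing. This is not my exact code; it assumes the BookCrossing ratings are already loaded into a `DataModel` (the `model` variable is hypothetical), and it uses the standard Taste evaluation API. `RandomUtils.useTestSeed()` pins Mahout's random seed, which should make the training/test split, and hence the score, repeatable across runs:

```java
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;
import org.apache.mahout.common.RandomUtils;

public class BookCrossingEval {

  public static double evaluate(DataModel model) throws TasteException {
    // Fix the random seed so the evaluator's random training/test
    // split is the same on every run (useful for debugging variance).
    RandomUtils.useTestSeed();

    RecommenderBuilder builder = new RecommenderBuilder() {
      @Override
      public Recommender buildRecommender(DataModel dataModel)
          throws TasteException {
        UserSimilarity similarity =
            new PearsonCorrelationSimilarity(dataModel);
        // Neighborhood size 10 is an assumption, not the example's value.
        UserNeighborhood neighborhood =
            new NearestNUserNeighborhood(10, similarity, dataModel);
        return new GenericUserBasedRecommender(
            dataModel, neighborhood, similarity);
      }
    };

    RecommenderEvaluator evaluator =
        new AverageAbsoluteDifferenceRecommenderEvaluator();
    // Train on 90% of each user's preferences, evaluate on the rest,
    // using 100% of the users.
    return evaluator.evaluate(builder, null, model, 0.9, 1.0);
  }
}
```

Without the `useTestSeed()` call, each invocation of `evaluate` draws a fresh random holdout, so some run-to-run variation is expected; the question is whether it should be as large as 2.1 to 4.8.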

Considering the size of the input (about 100,000 users and 100,000 books), I can't
imagine that the randomness in the algorithm alone could lead to evaluation
differences that large.

What do you think?
