mahout-user mailing list archives

From Pat Ferrel <...@occamsmachete.com>
Subject Re: Custom metrics in Mahout Recommendation
Date Tue, 03 Mar 2015 01:26:00 GMT
MAP is what I use, not precision. Mean average precision accounts for ranking better, and ranking is usually what you want to optimize. As I said, none of these are implemented in the current iteration of the recommender code, including RMSE, etc.: http://mahout.apache.org/users/recommender/intro-cooccurrence-spark.html

What use did you have for offline metrics? No offline metric is as good as A/B testing. Features built into recommenders will often lower one metric or another yet produce better results in the form of user engagement or sales. Offline metrics must be used with great caution and should never be relied on in a real-world situation where user testing is available.

On Mar 2, 2015, at 5:04 PM, Vikas Kumar <kumar093@umn.edu> wrote:

Sorry for the confusion. Yes, I meant the built-in recommender evaluation metrics such as RMSE, precision, recall, etc. But I am planning to write (or reuse, if they already exist — please let me know) metrics such as nDCG, popularity, average rating, diversity, etc.
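
For nDCG with binary relevance, something along these lines is what I have in mind — a rough standalone sketch in Scala, not tied to Mahout's APIs (the names are placeholders):

object NdcgAtK {

  private def log2(x: Double): Double = math.log(x) / math.log(2.0)

  // nDCG@k for one user with binary relevance: recommended is the ranked list,
  // relevant the held-out items treated as relevant.
  def ndcgAtK(recommended: Seq[String], relevant: Set[String], k: Int): Double = {
    val dcg = recommended.take(k).zipWithIndex.map {
      case (item, rank) if relevant.contains(item) => 1.0 / log2(rank + 2.0) // discounted gain
      case _ => 0.0
    }.sum
    // Ideal DCG: all relevant items ranked at the top of the list.
    val idcg = (0 until math.min(relevant.size, k)).map(i => 1.0 / log2(i + 2.0)).sum
    if (idcg == 0.0) 0.0 else dcg / idcg
  }
}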

Thanks
Vikas

On Mon, Mar 2, 2015 at 6:46 PM, Pat Ferrel <pat@occamsmachete.com> wrote:

> Evaluation metric? You mean like the old recommender evaluator? I’d use
> MAP (mean average precision), but none are implemented in the new Spark
> recommender code.
> 
> On Mar 2, 2015, at 3:12 PM, Vikas Kumar <kumar093@umn.edu> wrote:
> 
> I am implementing recommendation techniques in Mahout. However, I need
> custom evaluation metrics beyond the predefined or built-in ones. So,
> 
> Q: Can someone please point me to a sample custom evaluator or metric
> implementation in Mahout?
> 
> 
> Thanks
> Vikas
> 
> 

