mahout-dev mailing list archives

From "Manuel Blechschmidt (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAHOUT-906) Allow collaborative filtering evaluators to use custom logic in splitting data set
Date Fri, 02 Dec 2011 12:47:40 GMT

    [ https://issues.apache.org/jira/browse/MAHOUT-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13161594#comment-13161594 ]

Manuel Blechschmidt commented on MAHOUT-906:
--------------------------------------------

It would actually be a good idea to implement time-based splitting. Normally we want a
recommender to predict ratings for items that the user is going to like in the future, and
this should be the basis for evaluating the recommendations.

In an e-commerce scenario you want the recommender to predict the item that the user is going
to buy next, so you have to hold out the newest interactions.

The University of Hildesheim (Steffen Rendle, Christoph Freudenthaler, Lars Schmidt-Thieme)
published a paper in 2010 combining matrix factorization with Markov chains, and they were
able to outperform a standard recommender:
http://www.ismll.uni-hildesheim.de/pub/pdfs/RendleFreudenthaler2010-FPMC.pdf
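The time-based split described above can be sketched independently of Mahout's evaluator
classes. The Interaction record and the cutoff logic below are illustrative assumptions, not
an existing Mahout API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a time-based train/test split: everything observed before
// the cutoff timestamp trains the model, everything at or after it is
// held out for evaluation. Interaction is a hypothetical stand-in for
// a (user, item, rating) preference plus a timestamp.
public class TimeBasedSplit {

    record Interaction(long userId, long itemId, float rating, long timestamp) {}

    record Split(List<Interaction> training, List<Interaction> testing) {}

    static Split splitByTime(List<Interaction> interactions, long cutoff) {
        List<Interaction> training = new ArrayList<>();
        List<Interaction> testing = new ArrayList<>();
        for (Interaction i : interactions) {
            if (i.timestamp() < cutoff) {
                training.add(i);   // days 1..k: visible to the recommender
            } else {
                testing.add(i);    // day k+1 onward: hidden, used as ground truth
            }
        }
        return new Split(training, testing);
    }

    public static void main(String[] args) {
        List<Interaction> data = List.of(
            new Interaction(1, 10, 4.0f, 100),
            new Interaction(1, 11, 5.0f, 200),
            new Interaction(2, 10, 3.0f, 250));
        Split split = splitByTime(data, 200);
        System.out.println(split.training().size() + " train, "
            + split.testing().size() + " test");
    }
}
```

The cutoff would typically be chosen per data set (e.g. the start of the last day of
observations), rather than fixed.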
                
> Allow collaborative filtering evaluators to use custom logic in splitting data set
> ----------------------------------------------------------------------------------
>
>                 Key: MAHOUT-906
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-906
>             Project: Mahout
>          Issue Type: Improvement
>          Components: Collaborative Filtering
>    Affects Versions: 0.5
>            Reporter: Anatoliy Kats
>            Priority: Minor
>              Labels: features
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I want to start a discussion about factoring out the logic used in splitting the data set
> into training and testing.  Here is how things stand: there are two independent evaluator
> classes.  AbstractDifferenceRecommenderEvaluator splits all the preferences randomly into
> a training and testing set.  GenericRecommenderIRStatsEvaluator takes one user at a time,
> removes their top AT preferences, and counts how many of them the system recommends back.
> I have two use cases that both deal with temporal dynamics.  In one case, there may be
> expired items that can be used for building a training model, but not a test model.  In
> the other, I may want to simulate the behavior of a real system by building a preference
> matrix on days 1 through k, and testing on the ratings the user generated on day k+1.  In
> this case, it's not items, but preferences (user, item, rating triplets) that may belong
> only to the training set.  Before we discuss appropriate design, are there any other use
> cases we need to keep in mind?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
