mahout-user mailing list archives

From: Dmitriy Lyubimov <dlie...@gmail.com>
Subject: Re: Negative Preferences in a Recommender
Date: Tue, 18 Jun 2013 18:16:33 GMT
Koren, Volinsky: "Collaborative Filtering for Implicit Feedback Datasets"


On Tue, Jun 18, 2013 at 8:07 AM, Pat Ferrel <pat@occamsmachete.com> wrote:

> They are on a lot of papers; which one are you looking at?
>
> On Jun 17, 2013, at 6:30 PM, Dmitriy Lyubimov <dlieu.7@gmail.com> wrote:
>
> (Kinda doing something very close.)
>
> The Koren-Volinsky paper on implicit feedback can be generalized to
> decompose all input into a preference matrix (0 or 1) and a confidence
> matrix (which is essentially an observation weight matrix).
>
> If you did not get any observations, you encode that as (p=0, c=1), but
> if you know that the user did not like the item, you can encode that
> observation with a much higher confidence weight, something like
> (p=0, c=30) -- in your case, it seems, with as high a confidence as a
> conversion.
>
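To make that (p, c) encoding concrete, here is a minimal sketch in plain
Java (not an existing Mahout API); the event names and confidence values
are made-up placeholders, and the weights are exactly the per-action
hyperparameters discussed next:

// Illustrative only: mapping raw events to the (preference, confidence)
// pairs used in the Koren/Volinsky implicit feedback formulation.
// Event names and weight values are hypothetical.
public final class ImplicitEncoding {

  // Hypothetical per-action confidence weights; these are the extra
  // hyperparameters that would need to be tuned by cross-validation.
  private static final double C_NO_OBSERVATION = 1.0;
  private static final double C_PURCHASE       = 40.0;
  private static final double C_REFUND         = 30.0;

  /** Returns {p, c} for a single user-item observation. */
  static double[] encode(String event) {
    switch (event) {
      case "purchase":
        return new double[] {1.0, C_PURCHASE};        // liked, high confidence
      case "refund":
        return new double[] {0.0, C_REFUND};          // disliked, high confidence
      default:
        return new double[] {0.0, C_NO_OBSERVATION};  // unobserved, low confidence
    }
  }
}

The model then minimizes sum_{u,i} c_ui * (p_ui - x_u . y_i)^2 plus
regularization, so a high-confidence "dislike" cell pulls the prediction
toward 0 much harder than an unobserved cell does.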
> The problem with this is that you end up with quite a few additional
> parameters in your model to figure out, i.e. a confidence weight for
> each type of action in the system. You can establish those through an
> extensive cross-validation search, which is initially quite expensive
> (even with distributed cluster tech), but which can bail out much
> sooner in incremental runs once a previous good guess is already known.
>
> MapReduce doesn't work well for this, though, since it requires A LOT
> of iterations.
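As a sketch of that incremental search idea (hypothetical code, not an
existing Mahout utility): warm-start a small search around the
previously known good weight instead of scanning a full grid, with
evaluateOnHoldout standing in for "train weighted ALS with this weight
and score it on held-out data":

// Hypothetical 1-D confidence-weight search warm-started from a
// previous good guess; in practice there is one such weight per action
// type, so the same idea applies coordinate by coordinate.
import java.util.function.DoubleUnaryOperator;

public final class ConfidenceWeightSearch {

  /** Returns the best weight found near previousBest (lower score = better). */
  static double search(double previousBest, DoubleUnaryOperator evaluateOnHoldout) {
    double bestWeight = previousBest;
    double bestScore = evaluateOnHoldout.applyAsDouble(previousBest);
    // Probe a small multiplicative neighborhood of the previous guess
    // rather than an exhaustive grid; each probe is one full ALS run.
    for (double step : new double[] {0.5, 0.8, 1.25, 2.0}) {
      double w = previousBest * step;
      double score = evaluateOnHoldout.applyAsDouble(w);
      if (score < bestScore) {
        bestScore = score;
        bestWeight = w;
      }
    }
    return bestWeight;
  }
}

Each probe is itself an iterative ALS fit, which is why the iteration
count adds up so quickly on MapReduce.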
>
>
>
> On Mon, Jun 17, 2013 at 5:51 PM, Pat Ferrel <pat.ferrel@gmail.com> wrote:
>
> > In the case where you know a user did not like an item, how should
> > that information be treated in a recommender? Normally for retail
> > recommendations you have an implicit 1 for a purchase and no value
> > otherwise. But what if you knew the user did not like an item? Maybe
> > you have records of "I want my money back for this junk" reactions.
> >
> > You could make a 0/1 scale where 0 means a bad rating and 1 a good
> > one, with no value, as usual, meaning no preference? Some of the math
> > won't work then, though, since no value usually implicitly = 0, so
> > maybe -1 = bad, 1 = good, no preference implicitly = 0?
> >
> > Would it be better to treat a bad rating as a 1 and a good one as a
> > 2? This would be more like the old star-rating method, only we would
> > know where the cutoff between a good review and a bad one should be
> > (1.5).
> >
> > I suppose this could also be treated as another recommender in an
> > ensemble where r = r_p - r_h, with r_h = predictions from "I hate
> > this product" preferences?
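A trivial sketch of that ensemble idea, with predictLike and predictHate
standing in for the two separately trained recommenders (both names are
hypothetical):

// Hypothetical blend of two recommenders: one trained on positive
// ("like") preferences, one trained on "I hate this product" signals.
import java.util.function.ToDoubleBiFunction;

public final class HateAwareScorer {

  /** r = r_p - r_h for a single (user, item) pair. */
  static double score(long user, long item,
                      ToDoubleBiFunction<Long, Long> predictLike,
                      ToDoubleBiFunction<Long, Long> predictHate) {
    return predictLike.applyAsDouble(user, item)
         - predictHate.applyAsDouble(user, item);
  }
}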
> >
> > Has anyone found a good method?
>
>
