mahout-user mailing list archives

From Sean Owen <sro...@gmail.com>
Subject Re: Understanding the SVD recommender
Date Sun, 06 Nov 2011 17:52:21 GMT
Following up on this very old thread.

I understood all of this now, thanks; it clarified things greatly.

You multiply a new user vector by V to project it into the
reduced-dimension "pseudo-item" space. To get back to real items, you
multiply by V's transpose, which stands in for its inverse. So you are
really multiplying the user vector by V VT, which is not a no-op: since
V is truncated, VT V = I but V VT is not the identity, so the round
trip is only an approximation (?)
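
To make the question concrete, here is a rough numpy sketch of what I
mean (toy data made up here, not Mahout code):

    import numpy as np

    # Toy ratings matrix: 5 users x 4 items.
    A = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [1, 0, 0, 4],
                  [0, 1, 5, 4]], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    Vk = Vt[:k].T              # items x k: the truncated V

    u = A[0]                   # an existing user's ratings
    u_reduced = u @ Vk         # project into the k-dim pseudo-item space
    u_back = u_reduced @ Vk.T  # map back to item space via V's transpose

    # Vk' Vk is the k x k identity, but Vk Vk' is not the items x items
    # identity, so the round trip is a rank-k smoothing, not a no-op:
    print(np.allclose(Vk.T @ Vk, np.eye(k)))  # True
    print(np.allclose(Vk @ Vk.T, np.eye(4)))  # False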

The original paper talks about cosine similarities between users or
items in the reduced-dimension space, but can anyone shed light on the
point of that? The paper also seems to say that the predictions are
just computed as vector products, as above.
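
Continuing the sketch above, I read the reduced-space cosine
similarity as something like:

    # Compare two users by the angle between their k-dim vectors:
    u1 = A[0] @ Vk
    u2 = A[1] @ Vk
    cos = (u1 @ u2) / (np.linalg.norm(u1) * np.linalg.norm(u2))

...which would feed a neighborhood-style recommender, as opposed to
the direct vector-product prediction.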


Finally, and separately, I'm trying to understand the Lanczos method
as part of computing an SVD. Lanczos operates on a real symmetric
matrix, right? And am I right that it comes into play when you are
computing an SVD ...

A = U * S * VT

... because the columns of U are the eigenvectors of the (symmetric)
A*AT and the columns of V are the eigenvectors of AT*A? And so Lanczos
is used to find those eigenvectors and thereby complete the SVD?
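
Concretely, I would expect this check to pass (continuing the toy
numpy sketch above; this verifies the eigenvector relationship, not
the Lanczos iteration itself):

    # A'A is symmetric; each right singular vector is one of its
    # eigenvectors, with eigenvalue sigma^2. Lanczos approximates the
    # extreme eigenpairs of exactly this kind of symmetric operator.
    AtA = A.T @ A
    for i in range(len(s)):
        v = Vt[i]                                 # i-th right singular vector
        print(np.allclose(AtA @ v, s[i]**2 * v))  # True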


On Fri, Jun 4, 2010 at 6:48 AM, Ted Dunning <ted.dunning@gmail.com> wrote:
> You are correct.  The paper has an appalling treatment of the folding-in
> approach.
>
> In fact, the procedure is dead simple.
>
> The basic idea is to leave the coordinate system derived in the original SVD
> intact and simply project the new users into that space.
>
> The easiest way to see what is happening is to start again with the original
> rating matrix A as decomposed:
>
>   A = U S V'
>
> where A is users x items.  If we multiply on the right by V, we get
>
>   A V = U S V' V = U S
>
> (because V' V = I, by definition).  This result is (users x items) x (items
> x k) = users x k; that is, it gives a k-dimensional vector for each user.
>  Similarly, multiplication on the left by U' gives a k x items matrix which,
> when transposed, gives a k-dimensional vector for each item.
>
> This implies that if we augment U with new user row vectors U_new, we should
> be able to simply compute new k-dimensional vectors for the new users and
> adjoin these new vectors to the previous vectors.  Concisely put,
>
> ( A     )     ( A V     )
> (       ) V = (         )
> ( A_new )     ( A_new V )
>
> This isn't really magical.  It just says that we can compute new user
> vectors at any time by multiplying the new users' ratings by V.
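>
> A quick numpy sketch of both of these facts (toy random data here,
> nothing Mahout-specific):
>
>   import numpy as np
>   A = np.random.rand(5, 4)                 # users x items
>   U, s, Vt = np.linalg.svd(A, full_matrices=False)
>   k = 2
>   Vk = Vt[:k].T                            # items x k
>
>   # A V = U S gives the k-dim user vectors:
>   print(np.allclose(A @ Vk, U[:, :k] * s[:k]))             # True
>
>   # Folding in: new users' vectors are just their ratings times V:
>   A_new = np.random.rand(2, 4)
>   print(np.allclose(np.vstack([A, A_new]) @ Vk,
>                     np.vstack([A @ Vk, A_new @ Vk])))      # True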
>
> The diagram in figure one is hideously confusing because it looks like a
> picture of some kind of multiplication whereas it is really depicting some
> odd kind of flow diagram.
>
> Does this solve the problem?
>
> On Thu, Jun 3, 2010 at 9:26 AM, Sean Owen <srowen@gmail.com> wrote:
>
>> Section 3 is hard to understand.
>>
>> - Ak and P are defined, but not used later
>> - Definition of P has UTk x Nu as a computation. UTk is a k x m
>> matrix, and Nu is "t" x 1. t is not defined.
>> - This only makes sense if t = m. But m is the number of users, and Nu
>> is a user vector, so should have a number of elements equal to n, the
>> number of items
>> - Sk * VTk is described as a k x "d" matrix but d is undefined
>> - The diagram suggests that VTk is multiplied by all the Nu, which
>> makes more sense -- but only if the Nu are multiplied by VTk, not the
>> other way around. And the diagram depicts neither of those.
>> - Conceptually I would understand Nu x VTk, but then P is defined by
>> an additional product with Uk
>>
>> In short... what?
>>
>>
>> On Thu, Jun 3, 2010 at 4:15 PM, Ted Dunning <ted.dunning@gmail.com> wrote:
>> > Fire away.
>> >
>> > On Thu, Jun 3, 2010 at 3:52 AM, Sean Owen <srowen@gmail.com> wrote:
>> >
>> >> Is anyone out there familiar enough with this to a) discuss this paper
>> >> with me or b) point me to another writeup on the approach?
>> >>
>> >
>>
>
