mahout-dev mailing list archives

From Dmitriy Lyubimov <>
Subject LatentLogLinear code
Date Wed, 25 May 2011 00:12:39 GMT

So you do in-memory latent factor computation? I think this is the
same technique Koren described for learning latent factors.

However, I never understood why this factorization must come up with
the r best factors. I understand the incremental SVD approach
(essentially the same thing, except that learning factors iteratively
guarantees we capture the best ones), but if we learn them all in
parallel, does it do any good in your trials?
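For concreteness, here is a minimal sketch of the two regimes I mean -- this is my own toy Python, not Mahout code, and all names, init values, and hyperparameters are mine. `incremental=True` trains one factor at a time in the incremental-SVD (Funk) style; `incremental=False` updates all r factors on every observed rating:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_factorize(R, mask, r, incremental, epochs=300, lr=0.01, reg=0.02):
    """Fit R ~= U @ V.T on observed entries (mask == 1) by SGD.

    incremental=True: train factor 0 for `epochs` sweeps, then factor 1,
    etc., with earlier factors frozen (Funk-style incremental SVD).
    incremental=False: update all r factors on every rating.
    """
    n_users, n_items = R.shape
    # small near-constant init; the noise breaks column symmetry
    U = rng.normal(0.1, 0.02, (n_users, r))
    V = rng.normal(0.1, 0.02, (n_items, r))
    users, items = np.nonzero(mask)
    sweeps = range(r) if incremental else [None]
    for f in sweeps:
        for _ in range(epochs):
            for u, i in zip(users, items):
                err = R[u, i] - U[u] @ V[i]
                if incremental:
                    # update only factor f; earlier factors stay frozen
                    u_old = U[u, f]
                    U[u, f] += lr * (err * V[i, f] - reg * U[u, f])
                    V[i, f] += lr * (err * u_old - reg * V[i, f])
                else:
                    # update all r factors at once
                    u_old = U[u].copy()
                    U[u] += lr * (err * V[i] - reg * U[u])
                    V[i] += lr * (err * u_old - reg * V[i])
    return U, V
```

In the incremental regime each new factor fits the residual left by the already-converged earlier ones, which is what gives the "best factors first" ordering; in the parallel regime nothing orders the columns, which is exactly the part I'm asking about.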

Also, I thought the cold-start problem is helped by the fact that we
learn the baseline weights first, so they always give the best
independent approximation, and then the user-item interactions reveal
what is specific to a user and an item. But if we learn them all at
the same time, it does not seem obvious to me that we'd be learning
the best approximation when the latent factors are unknown
(new users). Also, in that implementation I can't see side-info
training at all -- is it there?
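To spell out the "learn weights first" point, here is a toy sketch (my Python, my names -- the regularized-mean recipe follows Koren's baseline estimates, and I'm not claiming this is what the implementation does): once mu, b_u, b_i are fit on their own, a brand-new user still gets the sensible prediction mu + b_i, with no latent factors involved.

```python
import numpy as np

def fit_baseline(ratings, reg=5.0):
    """ratings: iterable of (user, item, value).
    Returns global mean mu plus regularized user/item biases."""
    mu = np.mean([v for _, _, v in ratings])
    b_item, cnt_i = {}, {}
    for _, i, v in ratings:
        b_item[i] = b_item.get(i, 0.0) + (v - mu)
        cnt_i[i] = cnt_i.get(i, 0) + 1
    for i in b_item:
        b_item[i] /= cnt_i[i] + reg   # shrink toward 0 for rare items
    b_user, cnt_u = {}, {}
    for u, i, v in ratings:
        b_user[u] = b_user.get(u, 0.0) + (v - mu - b_item[i])
        cnt_u[u] = cnt_u.get(u, 0) + 1
    for u in b_user:
        b_user[u] /= cnt_u[u] + reg
    return mu, b_user, b_item

def baseline_predict(mu, b_user, b_item, u, i):
    # a cold-start user (or item) simply contributes a zero bias
    return mu + b_user.get(u, 0.0) + b_item.get(i, 0.0)
```

The question is whether this independence survives when the biases and the latent factors are trained jointly instead of in stages.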

