On Sat, Jan 11, 2014 at 1:31 PM, Klausen Schaefersinho <
klaus.schaefers@gmail.com> wrote:
> @Ted: Thanks for your great response. Just one little question. With
>
> > cooccurrence analysis and is focused on sparsification of the
> cooccurrence matrix to produce an indicator matrix
>
> you mean things like user-item or item-item methods?
>
In this context, linear approximations apply and the distinction between
user-centric and item-centric recommendation is nearly meaningless.
To see why, you can examine how a user-oriented recommender using Euclidean
or cosine distance finds similar users. The user scores will be A h where A is the
suitably weighted history matrix and h is the current history. The vector
h is a vector of item weights or counts. The matrix A is a user by item
matrix of weights. The product A h is a weighted list of users with the
most similar users having the highest scores.
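As a rough sketch of that scoring step (with a made-up tiny history matrix, purely for illustration):

```python
import numpy as np

# Hypothetical tiny example: 3 users x 4 items, binary interaction history.
A = np.array([
    [1, 1, 0, 0],   # user 0
    [0, 1, 1, 0],   # user 1
    [1, 0, 0, 1],   # user 2
])

# Current user's history as an item-weight vector.
h = np.array([1, 1, 0, 0])

# A h scores every user by overlap with h;
# the most similar user gets the highest score.
user_scores = A @ h
print(user_scores)  # user 0 matches h exactly and gets the top score
```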
If you use that list of scored users to find similar items, you are
essentially computing A^T (A h). This can be rearranged due to
associativity as (A^T A) h. The form in the parentheses is the
cooccurrence matrix, and in this form we have item-based recommendations as
opposed to user-based recommendations.
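The same toy example shows the rearrangement numerically; the values here are invented just to make the identity concrete:

```python
import numpy as np

# Hypothetical 3-user x 4-item binary history and a current history vector.
A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
])
h = np.array([1, 1, 0, 0])

# User-centric path: score users first, then pull items from similar users.
item_scores_via_users = A.T @ (A @ h)

# Item-centric path: precompute the co-occurrence matrix A^T A once,
# then apply it directly to the history.
cooccurrence = A.T @ A
item_scores_direct = cooccurrence @ h

# Associativity makes the two views give identical scores.
assert np.array_equal(item_scores_via_users, item_scores_direct)
print(item_scores_direct)
```

The practical payoff is that A^T A can be computed offline, so the item-centric form avoids touching every user at recommendation time.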
In cooccurrence-driven recommenders, what happens is that A is taken as a
binary matrix of interactions and the actual recommendation is computed
using LLR(A^T A) w h, where LLR is a sparse binary form derived from
examination of A^T A, and w is a vector of weights that depends on the
column sums of A^T A. Typically, the computation of LLR(A^T A) also involves
downsampling rows and columns of A to ensure that the product A^T A can be
computed efficiently.
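For reference, the LLR sparsification step is usually the log-likelihood ratio (G^2) test applied to each item pair's 2x2 contingency table; only pairs whose score clears a threshold survive into the binary indicator matrix. A minimal sketch (the threshold and counts are illustrative, not prescriptive):

```python
from math import log

def xlogx(x):
    # x * log(x), with the convention 0 * log(0) = 0.
    return 0.0 if x == 0 else x * log(x)

def entropy(*counts):
    # Unnormalized entropy of a set of counts.
    return xlogx(sum(counts)) - sum(xlogx(c) for c in counts)

def llr(k11, k12, k21, k22):
    """Log-likelihood ratio score for a 2x2 co-occurrence table.

    k11: both items seen together      k12: item A without item B
    k21: item B without item A         k22: neither item
    """
    row_entropy = entropy(k11 + k12, k21 + k22)
    col_entropy = entropy(k11 + k21, k12 + k22)
    mat_entropy = entropy(k11, k12, k21, k22)
    return 2.0 * (row_entropy + col_entropy - mat_entropy)

# Independent counts score near zero; a strong association scores high.
print(llr(10, 10, 10, 10))
print(llr(100, 1, 1, 100))
```

Pairs with a near-zero score are dropped, which is what turns the dense cooccurrence matrix into a sparse indicator matrix.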
But this is veering far from Storm.
