# storm-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: Recommender Engines on top of Storm
Date Sat, 11 Jan 2014 22:02:22 GMT
```On Sat, Jan 11, 2014 at 1:31 PM, Klausen Schaefersinho <
klaus.schaefers@gmail.com> wrote:

> @Ted: Thanks for your great response. Just one little question. With
>
> >  cooccurrence analysis and is focused on sparsification of the
> cooccurrence matrix to produce an indicator matrix
>
> you mean things like user-item or item-item methods?
>

In this context, linear approximations apply and the distinction between
user-centric and item-centric recommendation is nearly meaningless.

To see why, consider how a user-oriented recommender using Euclidean or
cosine distance finds similar users.  The user scores will be A h, where A
is the suitably weighted history matrix and h is the current history.  The
vector h is a vector of item weights or counts.  The matrix A is a
user-by-item matrix of weights.  The product A h is a scored list of users
in which the most similar users have the highest scores.

If you then use that list of scored users to find the items they liked, you
are essentially computing A^T (A h).  By associativity, this can be
rearranged as (A^T A) h.  The factor in parentheses is the cooccurrence
matrix, and in this form we have item-based rather than user-based
recommendations.

In cooccurrence-driven recommenders, A is taken to be a binary matrix of
interactions and the actual recommendation is computed as LLR(A^T A) w h,
where LLR(·) is a sparse binary indicator matrix derived from a
log-likelihood ratio test on the entries of A^T A, and w holds weights that
depend on the column sums of A^T A.  Typically, computing LLR(A^T A) also
involves down-sampling the rows and columns of A so that the product A^T A
can be computed efficiently.

But this is veering far from Storm.

```
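The algebraic point in the email, that scoring users with A h and then mapping back to items is the same as applying the precomputed cooccurrence matrix, can be checked on a toy example. This is my own illustration in plain Python (no libraries), with a made-up 3-user, 4-item binary interaction matrix:

```python
# Toy check of the identity A^T (A h) = (A^T A) h.
# A is a small binary user-by-item interaction matrix and h is one
# user's item-history vector. Pure Python, no dependencies.

A = [
    [1, 0, 1, 0],   # user 0 interacted with items 0 and 2
    [1, 1, 0, 0],   # user 1 interacted with items 0 and 1
    [0, 1, 1, 1],   # user 2 interacted with items 1, 2, 3
]
h = [1, 0, 1, 0]    # current history: items 0 and 2

def matvec(M, v):
    """Matrix-vector product."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    Yt = transpose(Y)
    return [[sum(a * b for a, b in zip(row, col)) for col in Yt] for row in X]

# User-centric path: score users by similarity to h, then map back to items.
user_scores = matvec(A, h)                                 # A h
item_scores_user_path = matvec(transpose(A), user_scores)  # A^T (A h)

# Item-centric path: precompute the cooccurrence matrix, then apply h.
cooccurrence = matmul(transpose(A), A)                     # A^T A
item_scores_item_path = matvec(cooccurrence, h)            # (A^T A) h

assert item_scores_user_path == item_scores_item_path      # same scores
```

Both paths produce identical item scores, which is why the user-based versus item-based distinction collapses once linear approximations apply.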
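The sparsification step can also be sketched. This is a minimal illustration of the log-likelihood ratio (G²) test on a 2x2 contingency table of user counts for an item pair; it is my own sketch of the standard statistic, not Mahout's or any particular library's code. Pairs whose cooccurrence count is surprisingly large relative to chance get a high score and are kept as indicators; pairs near chance score near zero and are dropped:

```python
# Sketch of LLR-based sparsification of a cooccurrence count.
# For an item pair, build a 2x2 contingency table of user counts and
# compute the G^2 (log-likelihood ratio) statistic. High values mean
# the cooccurrence is anomalous and worth keeping as an indicator.
from math import log

def xlogx(x):
    return 0.0 if x == 0 else x * log(x)

def entropy(*counts):
    # Unnormalized Shannon entropy of a list of counts.
    return xlogx(sum(counts)) - sum(xlogx(k) for k in counts)

def llr(k11, k12, k21, k22):
    """G^2 statistic for a 2x2 contingency table:
    k11 = users with both items, k12 = first item only,
    k21 = second item only, k22 = neither item."""
    row = entropy(k11 + k12, k21 + k22)
    col = entropy(k11 + k21, k12 + k22)
    mat = entropy(k11, k12, k21, k22)
    return 2.0 * (row + col - mat)

# Hypothetical data: 10,000 users, 100 have item X, 200 have item Y.
# Chance predicts ~2 users with both; here 50 have both.
anomalous = llr(50, 50, 150, 9750)   # large: keep X-Y as an indicator
# Same margins, but cooccurrence at the chance level of 2.
chance = llr(2, 98, 198, 9702)       # ~0: drop the pair
assert anomalous > chance
```

Thresholding this score (or keeping only the top-k pairs per item) is what turns the dense cooccurrence matrix into the sparse binary indicator matrix the email describes.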