mahout-dev mailing list archives

From: Ted Dunning <ted.dunn...@gmail.com>
Subject: Re: svd algorithms
Date: Sat, 06 Mar 2010 01:29:52 GMT
Mike,

http://issues.apache.org/jira/browse/MAHOUT-180 might be of interest.  Jake
has done a fair bit of work beyond that.

Next up is a stochastic decomposition version.  You can see the seeds of
that in Jake's other JIRAs.

On Fri, Mar 5, 2010 at 5:05 PM, mike bowles <mike@mbowles.com> wrote:

> ...  I thought it might be helpful to share what I've found so far.
>

Sharing is good.


> Really large matrices require using one of the randomizing methods to get
> done.  The methods I've seen reduce one dimension of the problem, but if
> singular values and/or left/right singular vectors are required, then an
> SVD remains to be done.


That SVD is, however, quite small and very fast compared to the extraction
of the small problem.  We have OK code for doing a small dense or
tridiagonal decomposition.
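
Concretely, the overall shape is something like the following (a generic
numpy sketch of the randomized approach, not the actual Mahout code; the
oversampling parameter p and the QR range-finding step are my assumptions):

    import numpy as np

    def randomized_svd(A, k, p=10, rng=np.random.default_rng(0)):
        """Reduce one dimension of A (m x n) at random, then do a small
        dense SVD on the reduced problem."""
        omega = rng.standard_normal((A.shape[1], k + p))
        Q, _ = np.linalg.qr(A @ omega)   # orthonormal basis for range(A @ omega)
        B = Q.T @ A                      # small: (k+p) x n
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

The expensive part is the two passes over A; the dense SVD of B is tiny by
comparison, which is the point above.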

> Lanczos Algorithm - The Lanczos algorithm is covered pretty well in Chapter
> 9 of Golub and Van Loan, "Matrix Computations".


Also nicely covered on Wikipedia.


> The Lanczos algorithm
> constructs an orthogonal basis set for the Krylov spaces (e.g. the space
> spanned by (x, Ax, A^2 x, ...)).  The basic Lanczos algorithm is only
> suited to symmetric matrices and has some numerical difficulties
> maintaining orthogonality.

I think that the Arnoldi algorithm you refer to is equivalent to
decomposing the symmetric matrix:

      [ 0   A' ]
      [ A   0  ]

Jake takes the slightly less stable approach of decomposing A'A.
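
A quick numerical check of the two routes (a throwaway numpy sketch; the
matrix sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    m, n = A.shape
    print(np.linalg.svd(A, compute_uv=False))    # true singular values

    # Symmetric embedding [[0, A'], [A, 0]]: eigenvalues are +/- sigma(A),
    # padded with zeros when A is not square.
    aug = np.block([[np.zeros((n, n)), A.T],
                    [A, np.zeros((m, m))]])
    print(np.sort(np.abs(np.linalg.eigvalsh(aug)))[::-1][:n])

    # Decomposing A'A squares the singular values, which is what costs
    # accuracy for the small ones.
    print(np.sqrt(np.maximum(np.linalg.eigvalsh(A.T @ A), 0.0))[::-1])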

For limited numbers of singular values, both work pretty well, and in our
current regime numerical problems are dominated by errors in the inputs and
by the breathtaking reduction in dimensionality that we are using.
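
For reference, the bare symmetric Lanczos iteration is short enough to
sketch (plain numpy, with no reorthogonalization, so it will show exactly
the loss of orthogonality mentioned above):

    import numpy as np

    def lanczos(A, k, rng=np.random.default_rng(0)):
        """k steps of symmetric Lanczos on A; returns the tridiagonal
        coefficients.  Eigenvalues of tridiag(beta, alpha, beta)
        approximate the extreme eigenvalues of A."""
        q = rng.standard_normal(A.shape[0])
        q /= np.linalg.norm(q)
        q_prev, b = np.zeros_like(q), 0.0
        alpha, beta = [], []
        for _ in range(k):
            w = A @ q - b * q_prev          # three-term recurrence
            a = q @ w
            w -= a * q
            b = np.linalg.norm(w)
            alpha.append(a)
            beta.append(b)
            if b == 0.0:                    # hit an invariant subspace
                break
            q_prev, q = q, w / b
        return np.array(alpha), np.array(beta[:-1])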



>
> Divide and Conquer - A more current favorite is the "divide and conquer"
> algorithm, also described in "Matrix Computations".  In Section 8.5.4 (in
> the third edition - which is the current one), Golub shows how to apply a
> perturbation to a tridiagonal matrix in order to break it into a block
> diagonal matrix with two independent diagonal blocks.


I think that these techniques are most useful in the small.  Getting the
full tridiagonalization is not something that can be considered for the
massive datasets that we are talking about.  As such, I am not sure how
much dividing and conquering of that tridiagonal is worth.  Or I may be
missing your point.
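
(For anyone following along, the tearing Mike describes is just a rank-one
correction.  A toy numpy illustration, with the split point chosen
arbitrarily:)

    import numpy as np

    n, split = 6, 3
    d = np.arange(1.0, n + 1)              # diagonal of T
    e = np.full(n - 1, 0.5)                # off-diagonal of T
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

    # Subtracting rho * v v' with v = e_{split-1} + e_split zeros the
    # coupling entries, leaving two independent tridiagonal blocks.
    rho = T[split, split - 1]
    v = np.zeros(n)
    v[split - 1] = v[split] = 1.0
    T_blocks = T - rho * np.outer(v, v)
    assert T_blocks[split, split - 1] == 0.0

The blocks can then be decomposed independently and the rank-one update
folded back in afterwards.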


> Non-square matrices - All of the methods I've found for actually producing
> singular values or singular vectors are targeted at symmetric matrices or
> at most square unsymmetric ones (the Arnoldi algorithm).  I haven't seen
> anything that attacks the non-square matrix that inevitably arises in
> latent semantic analysis (for example).


Golub and Van Loan describe the method I alluded to above.  Basically just
create a symmetric matrix by stacking the matrix and its transpose along
with a good measure of zeros.


> One approach that comes to mind is the
> following.  Suppose we have an m x n matrix M that needs svd'ing.  It's
> easy to show that MM^T has the same LHS singular vectors as M.


Yes.

And as you say next,


> ...Typically one of these is relatively small (on the order of the square
> of the number of singular values in the final approximation (or so), or
> k log(k) for the better randomized methods).  The other dimension is
> relatively large (e.g. the number of words or tokens that are being
> counted).
>

One addition that Jake has mentioned is that for a row accessible matrix,
you can accumulate R' M' M R in a single pass through M where M is kinda
square and really big and R is a tall skinny matrix.  Decomposing this small
matrix gives you the information needed to reconstruct left singular vectors
and singular values of M with just one more pass through M.
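
In code, that idea looks roughly like this (a numpy sketch of my
understanding; the rows() callable stands in for whatever row-accessible
store would actually back M):

    import numpy as np

    def streaming_left_svd(rows, n, k, rng=np.random.default_rng(0)):
        """Two passes over the rows of M: accumulate S = R'M'MR, then
        rebuild approximate left singular vectors and singular values."""
        R = rng.standard_normal((n, k))
        S = np.zeros((k, k))
        for m_i in rows():                  # pass 1: S = sum (m_i R)'(m_i R)
            y = m_i @ R
            S += np.outer(y, y)
        lam, W = np.linalg.eigh(S)          # small k x k decomposition
        lam, W = lam[::-1], W[:, ::-1]      # descending order
        sigma = np.sqrt(np.maximum(lam, 0.0))
        # pass 2: U = (M R) W diag(1/sigma), one row at a time
        U = np.array([(m_i @ R) @ W / sigma for m_i in rows()])
        return U, sigma

    # e.g. with M in memory, rows is just: lambda: iter(M)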



-- 
Ted Dunning, CTO
DeepDyve
