mahout-user mailing list archives

From: Ted Dunning <ted.dunn...@gmail.com>
Subject: Re: Transposing a matrix is limited by how large a node is.
Date: Fri, 06 May 2011 15:18:23 GMT
If you have the code and would like to contribute it, file a JIRA and attach
a patch.

It will be interesting to hear how the SVD proceeds.  Such a large dense
matrix is an unusual target for SVD.

Also, it is possible to adapt the R version of random projection to never
keep all of the large matrix in memory.  Instead, only slices of the matrix
are kept and the multiplications involved are done progressively.  The
results are kept in memory, but not the large matrix.  This would probably
make your sequential version fast enough to use.  R may not be usable unless
it can read the portions of your large matrix quickly using binary I/O.
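To make the slicing idea concrete, here is a minimal sketch in Python/NumPy
rather than R (read_slice, the slice size, and the target rank k are
hypothetical stand-ins, not anything from Mahout):

    import numpy as np

    def random_project(read_slice, n_rows, n_cols, k, slice_rows=1000, seed=0):
        # Project an n_rows x n_cols matrix A onto k random directions,
        # reading A one horizontal slice at a time so the full matrix
        # never sits in memory.
        rng = np.random.default_rng(seed)
        # Only the random matrix Omega (n_cols x k) and the small result
        # Y = A Omega (n_rows x k) are kept in memory.
        omega = rng.standard_normal((n_cols, k))
        y = np.empty((n_rows, k))
        for start in range(0, n_rows, slice_rows):
            stop = min(start + slice_rows, n_rows)
            a_slice = read_slice(start, stop)   # shape (stop - start, n_cols)
            y[start:stop, :] = a_slice @ omega  # accumulate progressively
        return y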

Also, I suspect that you are trying to get the transpose in order to
decompose A' A.  As far as I can tell this is not necessary: if A = U S V',
then A' A = V S^2 V', so decomposing A gives you the decomposition of A' A
at essentially no extra cost beyond the decomposition of A itself.
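A tiny NumPy check of that identity, with arbitrary sizes standing in for
the real matrix:

    import numpy as np

    a = np.random.default_rng(1).standard_normal((200, 50))

    # One SVD of A ...
    u, s, vt = np.linalg.svd(a, full_matrices=False)

    # ... already contains the eigendecomposition of A'A = V diag(s^2) V'.
    print(np.allclose(vt.T @ np.diag(s**2) @ vt, a.T @ a))  # True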

On Fri, May 6, 2011 at 7:36 AM, Vincent Xue <xue.vin@gmail.com> wrote:

> Because I am limited by my resources, I coded up a slower but effective
> implementation of the transpose job that I could share. It avoids loading
> all the data onto one node by transposing the matrix in pieces. The
> slowest part of this is combining the pieces back to one matrix. :(
>
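For what it is worth, a rough sketch of the piece-wise transpose Vincent
describes, again in Python/NumPy (read_rows and the piece size are
hypothetical; a real distributed job would write each strip out separately
and then merge them, which is the slow combining step he mentions):

    import numpy as np

    def transpose_in_pieces(read_rows, n_rows, n_cols, piece=1000):
        # Each horizontal slice of A becomes a vertical strip of A'.
        strips = []
        for start in range(0, n_rows, piece):
            stop = min(start + piece, n_rows)
            strips.append(read_rows(start, stop).T)  # (n_cols, stop - start)
        # Combining the strips back into one matrix is the slow part.
        return np.hstack(strips)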
