mahout-dev mailing list archives

From "Jake Mannix (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAHOUT-180) port Hadoop-ified Lanczos SVD implementation from decomposer
Date Wed, 20 Jan 2010 07:00:03 GMT

    [ https://issues.apache.org/jira/browse/MAHOUT-180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12802710#action_12802710 ]

Jake Mannix commented on MAHOUT-180:
------------------------------------

Jeepers: for performance, I had switched from using SparseRowMatrix to DenseMatrix in a few
places, and suddenly orthogonality was no longer maintained.  Why?  Because of this ugliness,
which I thought was long since fixed, in DenseMatrix:

{code}
  @Override
  public Vector getRow(int row) {
    if (row < 0 || row >= rowSize()) {
      throw new IndexException();
    }
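    // BUG: this constructor copies values[row], so callers get a
    // detached copy of the row, not a view into the matrix.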
    return new DenseVector(values[row]);
  }
{code}

The lovely bug here?  This is a full deep copy of the row, not a shallow view that would let
you mutate the original matrix!  Arrrrrggggg!  I swear there was already a bug filed and
fixed for this.  The fix is easy for this method (add a "shallow" constructor to
DenseVector and use it here); a sketch follows below.  The right fix also takes care of
getColumn, which requires a little more work, but not much.
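
For reference, a minimal sketch of what that shallow fix could look like.  The boolean-flag
constructor and the "values" field name are assumptions for illustration, not the committed patch:

{code}
  // Hypothetical "shallow" constructor on DenseVector (signature assumed):
  // when shallowCopy is true, wrap the caller's array instead of cloning it.
  public DenseVector(double[] values, boolean shallowCopy) {
    this.values = shallowCopy ? values : values.clone();
  }

  // getRow then hands out a view backed by the matrix's own row array.
  @Override
  public Vector getRow(int row) {
    if (row < 0 || row >= rowSize()) {
      throw new IndexException();
    }
    return new DenseVector(values[row], true);
  }
{code}

With a view in hand, mutations through the returned Vector write straight through to the
matrix's backing array, which is exactly what the orthogonalization code relies on.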

> port Hadoop-ified Lanczos SVD implementation from decomposer
> ------------------------------------------------------------
>
>                 Key: MAHOUT-180
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-180
>             Project: Mahout
>          Issue Type: New Feature
>          Components: Math
>    Affects Versions: 0.2
>            Reporter: Jake Mannix
>            Assignee: Jake Mannix
>            Priority: Minor
>             Fix For: 0.3
>
>         Attachments: MAHOUT-180.patch
>
>
> I wrote up a Hadoop version of the Lanczos algorithm for performing SVD on sparse matrices,
> available at http://decomposer.googlecode.com/, which is Apache-licensed, and I'm willing
> to donate it.  I'll have to port the implementation over to use Mahout vectors, or else add
> in these vectors as well.
> Current issues with the decomposer implementation include: if your matrix is really big,
> you need to re-normalize before decomposition: find the largest eigenvalue first, and divide
> all your rows by that value, then decompose, or else you'll blow past Double.MAX_VALUE once
> you've run too many iterations (the L^2 norm of intermediate vectors grows roughly as
> (largest-eigenvalue)^(num-eigenvalues-found-so-far), so losing precision on the lower end is
> better than blowing past MAX_VALUE).  When this is ported to Mahout, we should add the
> capability to do this automatically (run a couple of iterations to find the largest
> eigenvalue, save that, then iterate while scaling vectors by 1/max_eigenvalue).
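
For illustration, a hedged sketch of that auto-rescaling in plain Java, with a dense
symmetric double[][] a of size n standing in for the real distributed matrix (the names and
the three-step iteration count are assumptions, not anything from the patch):

{code}
// Estimate the largest eigenvalue with a few power-iteration steps, then
// divide every entry by it, so the (lambda_max)^k growth of intermediate
// Lanczos vector norms stays far below Double.MAX_VALUE.
double[] v = new double[n];
java.util.Arrays.fill(v, 1.0 / Math.sqrt(n));  // unit-norm starting vector
double lambdaMax = 1.0;
for (int iter = 0; iter < 3; iter++) {
  double[] av = new double[n];
  for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
      av[i] += a[i][j] * v[j];
    }
  }
  double norm = 0.0;
  for (double x : av) {
    norm += x * x;
  }
  lambdaMax = Math.sqrt(norm);     // ||A v|| approaches |lambda_max| as v converges
  for (int i = 0; i < n; i++) {
    v[i] = av[i] / lambdaMax;      // re-normalize for the next iteration
  }
}
for (double[] row : a) {
  for (int i = 0; i < row.length; i++) {
    row[i] /= lambdaMax;           // scale the whole matrix by 1/max_eigenvalue
  }
}
{code}

Scaling by 1/max_eigenvalue caps the largest eigenvalue of the scaled matrix at 1, so
repeated multiplication can only shrink norms, trading low-order precision for overflow
safety, as the description above argues.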

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

