Mailing-List: contact mahout-dev-help@lucene.apache.org; run by ezmlm
Reply-To: mahout-dev@lucene.apache.org
Message-ID: <1656458466.362951263970803838.JavaMail.jira@brutus.apache.org>
Date: Wed, 20 Jan 2010 07:00:03 +0000 (UTC)
From: "Jake Mannix (JIRA)"
To: mahout-dev@lucene.apache.org
Subject: [jira] Commented: (MAHOUT-180) port Hadoop-ified Lanczos SVD implementation from decomposer
In-Reply-To: <1885778826.1253908875980.JavaMail.jira@brutus>

[ https://issues.apache.org/jira/browse/MAHOUT-180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12802710#action_12802710 ]

Jake
Mannix commented on MAHOUT-180:
-------------------------------

Jeepers. For performance, I had switched from SparseRowMatrix to DenseMatrix in a few places, and suddenly failed to keep orthogonality. Why? Because of this ugliness in DenseMatrix that I thought was long since fixed:

{code}
@Override
public Vector getRow(int row) {
  if (row < 0 || row >= rowSize()) {
    throw new IndexException();
  }
  return new DenseVector(values[row]);
}
{code}

The lovely bug here? This returns a full deep copy of the row, not a shallow view that would let you mutate the original matrix! Arrrrrggggg! I swear a bug was already filed and fixed for this. The fix is easy for this method (add a "shallow" constructor to DenseVector and use it here). The right fix also takes care of getColumn, which requires a little more work, but not much.

> port Hadoop-ified Lanczos SVD implementation from decomposer
> ------------------------------------------------------------
>
>                 Key: MAHOUT-180
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-180
>             Project: Mahout
>          Issue Type: New Feature
>          Components: Math
>    Affects Versions: 0.2
>            Reporter: Jake Mannix
>            Assignee: Jake Mannix
>            Priority: Minor
>             Fix For: 0.3
>
>         Attachments: MAHOUT-180.patch
>
>
> I wrote up a Hadoop version of the Lanczos algorithm for performing SVD on sparse matrices, available at http://decomposer.googlecode.com/, which is Apache-licensed, and I'm willing to donate it. I'll have to port the implementation over to use Mahout vectors, or else add in these vectors as well.
> Current issues with the decomposer implementation include: if your matrix is really big, you need to re-normalize before decomposition. Find the largest eigenvalue first and divide all your rows by that value, then decompose, or else you'll blow past Double.MAX_VALUE once you've run too many iterations (the L^2 norm of intermediate vectors grows roughly as (largest-eigenvalue)^(num-eigenvalues-found-so-far), so losing precision on the lower end is better than blowing past MAX_VALUE). When this is ported to Mahout, we should add the capability to do this automatically (run a couple of iterations to find the largest eigenvalue, save it, then iterate while scaling vectors by 1/max_eigenvalue).

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
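For reference, the shallow-view fix Jake describes in his comment could be sketched as below. This is a minimal standalone sketch, not Mahout's actual code: the boolean "shallow" constructor flag, the class shapes, and the field names here are assumptions for illustration.

```java
// Sketch of the shallow-view fix: getRow wraps the backing array
// instead of copying it, so mutations write through to the matrix.
public class DenseVectorSketch {

  static class DenseVector {
    final double[] values;

    // shallow == true: keep a reference to the caller's array (a view);
    // shallow == false: defensive deep copy, as the buggy getRow did.
    DenseVector(double[] values, boolean shallow) {
      this.values = shallow ? values : values.clone();
    }

    double get(int i) { return values[i]; }
    void set(int i, double v) { values[i] = v; }
  }

  static class DenseMatrix {
    final double[][] values;

    DenseMatrix(double[][] values) { this.values = values; }

    // Returns a view, not a copy: mutating the returned vector
    // mutates the underlying matrix row.
    DenseVector getRow(int row) {
      if (row < 0 || row >= values.length) {
        throw new IndexOutOfBoundsException("row " + row);
      }
      return new DenseVector(values[row], true);
    }
  }

  public static void main(String[] args) {
    DenseMatrix m = new DenseMatrix(new double[][] {{1.0, 2.0}, {3.0, 4.0}});
    m.getRow(0).set(1, 9.0);            // writes through to the matrix
    System.out.println(m.values[0][1]); // 9.0
  }
}
```

With the deep-copying constructor (shallow == false), the `set` call above would be lost, which is exactly the orthogonality-breaking behavior described in the comment.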
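The re-normalization workaround described in the quoted issue (estimate the largest eigenvalue, then scale all rows by 1/max_eigenvalue so intermediate norms stay far below Double.MAX_VALUE) could be sketched like this. Using plain power iteration for the estimate is an assumption on my part; the issue only says to "run a couple of iterations", and the class and method names here are illustrative, not Mahout API.

```java
// Sketch of the pre-scaling workaround: estimate the dominant
// eigenvalue with power iteration, then divide every row by it.
public class PreScaleSketch {

  // Crude power-iteration estimate of the largest (absolute)
  // eigenvalue of a symmetric matrix a.
  static double estimateMaxEigenvalue(double[][] a, int iters) {
    int n = a.length;
    double[] v = new double[n];
    java.util.Arrays.fill(v, 1.0 / Math.sqrt(n)); // unit start vector
    double lambda = 0.0;
    for (int it = 0; it < iters; it++) {
      double[] w = new double[n];
      for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
          w[i] += a[i][j] * v[j];
        }
      }
      double norm = 0.0;
      for (double x : w) { norm += x * x; }
      norm = Math.sqrt(norm);
      lambda = norm; // ||A v|| for unit v converges to the max eigenvalue
      for (int i = 0; i < n; i++) { v[i] = w[i] / norm; }
    }
    return lambda;
  }

  // Scale every row by 1/lambdaMax before decomposing, so intermediate
  // vector norms grow from a base of ~1 instead of lambdaMax.
  static void scaleRows(double[][] a, double lambdaMax) {
    for (double[] row : a) {
      for (int j = 0; j < row.length; j++) {
        row[j] /= lambdaMax;
      }
    }
  }

  public static void main(String[] args) {
    double[][] a = {{4.0, 1.0}, {1.0, 3.0}};
    double lambda = estimateMaxEigenvalue(a, 50);
    scaleRows(a, lambda);
    System.out.println(estimateMaxEigenvalue(a, 50)); // ≈ 1.0 after scaling
  }
}
```

After scaling, the dominant eigenvalue is ~1, so the (largest-eigenvalue)^(num-eigenvalues-found-so-far) growth the issue warns about stays bounded, at the cost of precision on the smallest singular values.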