mahout-dev mailing list archives

From "Jonathan Traupman (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAHOUT-672) Implementation of Conjugate Gradient for solving large linear systems
Date Mon, 09 May 2011 06:00:06 GMT

     [ https://issues.apache.org/jira/browse/MAHOUT-672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Traupman updated MAHOUT-672:
-------------------------------------

    Status: Patch Available  (was: Open)

Use the "mahout-672-combined.patch" attachment and ignore the rest.


> Implementation of Conjugate Gradient for solving large linear systems
> ---------------------------------------------------------------------
>
>                 Key: MAHOUT-672
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-672
>             Project: Mahout
>          Issue Type: New Feature
>          Components: Math
>    Affects Versions: 0.5
>            Reporter: Jonathan Traupman
>            Priority: Minor
>             Fix For: 0.6
>
>         Attachments: 0001-MAHOUT-672-LSMR-iterative-linear-solver.patch, 0001-MAHOUT-672-LSMR-iterative-linear-solver.patch, MAHOUT-672.patch, mahout-672-combined.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> This patch contains an implementation of conjugate gradient, an iterative algorithm for solving large linear systems. In particular, it is well suited for large sparse systems where a traditional QR or Cholesky decomposition is infeasible. Conjugate gradient only works for matrices that are square, symmetric, and positive definite (basically the same types where Cholesky decomposition is applicable). Systems like these commonly occur in statistics and machine learning problems (e.g. regression).
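
For readers who want to see the shape of the algorithm, here is a minimal sketch of the conjugate gradient iteration in plain Java. It uses bare arrays rather than the Mahout Matrix/Vector classes, and the class name, tolerance handling, and iteration cap are illustrative, not the API from the patch:

public class CgSketch {

  // Solves A x = b for symmetric positive definite A, starting from x = 0.
  static double[] solve(double[][] a, double[] b, int maxIters, double tol) {
    int n = b.length;
    double[] x = new double[n];
    double[] r = b.clone();              // residual r = b - A*x; x = 0, so r = b
    double[] p = r.clone();              // first search direction is the residual
    double rsOld = dot(r, r);
    for (int k = 0; k < maxIters && Math.sqrt(rsOld) > tol; k++) {
      double[] ap = times(a, p);         // the only way A is touched: one product per iteration
      double alpha = rsOld / dot(p, ap); // step length along p
      for (int i = 0; i < n; i++) {
        x[i] += alpha * p[i];
        r[i] -= alpha * ap[i];
      }
      double rsNew = dot(r, r);
      double beta = rsNew / rsOld;       // keeps the new direction A-conjugate to the previous ones
      for (int i = 0; i < n; i++) {
        p[i] = r[i] + beta * p[i];
      }
      rsOld = rsNew;
    }
    return x;
  }

  static double dot(double[] u, double[] v) {
    double s = 0.0;
    for (int i = 0; i < u.length; i++) {
      s += u[i] * v[i];
    }
    return s;
  }

  static double[] times(double[][] a, double[] v) {
    double[] y = new double[a.length];
    for (int i = 0; i < a.length; i++) {
      y[i] = dot(a[i], v);
    }
    return y;
  }
}

A quick sanity check: with A = {{4, 1}, {1, 3}} and b = {1, 2}, the iteration converges to roughly x = {0.0909, 0.6364} within two steps. Note that A appears only inside the product A*p; that is what makes a distributed variant built on top of a distributed matrix-vector product straightforward, as the next paragraph describes.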
> Both a standard (in-memory) solver and a distributed Hadoop-based solver (basically the standard solver run using a DistributedRowMatrix, a la DistributedLanczosSolver) are included.
> There is already a version of this algorithm in the taste package, but it doesn't operate on standard Mahout matrix/vector objects, nor does it implement a distributed version. I believe this implementation will be more generically useful to the community than the specialized one in taste.
> This implementation solves the following types of systems:
> Ax = b, where A is square, symmetric, and positive definite.
> A'Ax = b, where A is arbitrary but A'A is positive definite. Directly solving this system is more efficient than computing A'A explicitly and then solving.
> (A + lambda * I)x = b and (A'A + lambda * I)x = b, for systems where A or A'A is singular and/or not full rank. This occurs commonly when A is large and sparse. Solving a system of this form is used, for example, in ridge regression.
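
To make the "more efficient than computing A'A explicitly" point concrete: conjugate gradient needs the matrix only through matrix-vector products, so (A'A + lambda * I)p can be evaluated as A'(Ap) + lambda*p with two passes over A and without ever materializing A'A. A hedged sketch in the same plain-array style as above (the method name is made up for illustration; it could sit in the CgSketch class and be used in place of its times() call):

  // Computes (A'A + lambda*I) * p without forming A'A:
  // first ap = A*p, then result = A'*ap + lambda*p.
  static double[] ridgeOperatorTimes(double[][] a, double lambda, double[] p) {
    int rows = a.length;
    int cols = p.length;
    double[] ap = new double[rows];          // ap = A * p
    for (int i = 0; i < rows; i++) {
      for (int j = 0; j < cols; j++) {
        ap[i] += a[i][j] * p[j];
      }
    }
    double[] result = new double[cols];      // result = A' * ap + lambda * p
    for (int j = 0; j < cols; j++) {
      for (int i = 0; i < rows; i++) {
        result[j] += a[i][j] * ap[i];
      }
      result[j] += lambda * p[j];
    }
    return result;
  }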
> In addition to the normal conjugate gradient solver, this implementation also handles preconditioning, and a sample Jacobi preconditioner is included as an example. More work will be needed to build more advanced preconditioners if desired.
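
For reference, a Jacobi preconditioner uses M = diag(A), so applying M^{-1} is just an element-wise division by A's diagonal; in preconditioned CG the residual r is replaced by z = M^{-1}r when forming search directions. A minimal sketch, again with plain arrays and not the patch's actual preconditioner interface:

  // Applies the inverse of the Jacobi preconditioner M = diag(A) to a residual r.
  // Assumes a nonzero diagonal, which holds for a symmetric positive definite A.
  static double[] jacobiApplyInverse(double[][] a, double[] r) {
    double[] z = new double[r.length];
    for (int i = 0; i < r.length; i++) {
      z[i] = r[i] / a[i][i];
    }
    return z;
  }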

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
