commons-dev mailing list archives

From Luc Maisonobe <Luc.Maison...@free.fr>
Subject Re: [math] Q-R -decomposition
Date Sun, 21 May 2006 23:06:16 GMT

>> > Beyond what is available in the API (Q and R), what exactly does the
>> > QR Decomp "know" that makes solving easier?

Foreword: what follows applies only to least squares problems; it is not
a general discussion of decomposition algorithms.

When using QR decomposition in least squares problems, you NEVER really
compute Q explicitly, and if you don't retrieve Q, you don't retrieve R
either. The decomposition algorithm keeps some information about Q (the
Householder vectors, plus some coefficients and permutation indices if
you want to support rank-deficient problems) and uses this information
to compute transpose(Q).V for some vector V without ever computing Q
itself; it also uses R internally to provide some higher level result.
It does not hand you Q and R to let you compute something with them. Q
is a huge matrix, much larger than the Householder vectors it can be
deduced from. This is especially true when the problem has a few
parameters but a lot of measurements (in orbit determination problems,
for example, you often have fewer than 10 parameters in the model and
several tens of thousands of measurements).
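
As a rough sketch of what "using the Householder vectors" means (the
names applyQTranspose, house and beta are illustrative, this is not the
commons-math code), applying transpose(Q) to a vector is just a loop
over the stored reflections:

/**
 * Apply transpose(Q) to a vector using the stored Householder vectors,
 * without ever forming the n x n matrix Q. Illustrative names: house[k]
 * is the k-th Householder vector (length n, zero below index k) and
 * beta[k] = 2 / (v.v).
 */
static void applyQTranspose(double[][] house, double[] beta, double[] x) {
    int m = house.length;   // number of reflections (columns of J)
    int n = x.length;       // length of the vector (rows of J)
    for (int k = 0; k < m; ++k) {
        double[] v = house[k];
        // dot = v . x (the reflection only touches components k..n-1)
        double dot = 0;
        for (int i = k; i < n; ++i) {
            dot += v[i] * x[i];
        }
        // x <- x - beta * (v . x) * v, one Householder reflection
        for (int i = k; i < n; ++i) {
            x[i] -= beta[k] * dot * v[i];
        }
    }
}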

What makes least squares solving easier is not the QR decomposition
itself, but the way it is used in the surrounding algorithm
(Levenberg-Marquardt, for example). In this case, you do NOT compute the
normal equations (i.e. transpose(J).J where J is the jacobian matrix)
and then decompose the resulting square matrix, as you would in a
Gauss-Newton solver. You decompose the jacobian matrix itself (this is
the reason for the transpose(Q).V part of the solver). The two
decompositions are therefore not used in the same way.
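
Once transpose(Q).V has been applied to the residuals, the correction dx
follows from a back-substitution on the small m x m triangular matrix R.
A minimal sketch, assuming R is stored as a dense array (again
illustrative names, not the library code):

/**
 * Solve R.dx = qtv where R is upper triangular and qtv holds the first
 * m components of transpose(Q).V.
 */
static double[] solveUpperTriangular(double[][] r, double[] qtv) {
    int m = r.length;
    double[] dx = new double[m];
    for (int i = m - 1; i >= 0; --i) {
        double sum = qtv[i];
        for (int j = i + 1; j < m; ++j) {
            sum -= r[i][j] * dx[j];
        }
        dx[i] = sum / r[i][i];   // assumes full rank: r[i][i] != 0
    }
    return dx;
}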

The QR way is more accurate because there are situations where the
squaring of the jacobian matrix involved in the normal equations
computation leads to cancellation of some tiny values (epsilon becomes
epsilon^2, which may fall below machine precision). For difficult
problems, this is really important.
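
A tiny demonstration of this cancellation (a made-up example, not taken
from commons-math): with the classic Laeuchli jacobian
J = {{1, 1}, {eps, 0}, {0, eps}}, the exact normal matrix is
{{1+eps^2, 1}, {1, 1+eps^2}}, and as soon as eps^2 falls below machine
precision the computed Jt.J is exactly singular, while J itself remains
perfectly usable by a QR decomposition:

public class CancellationDemo {
    public static void main(String[] args) {
        double eps  = 1.0e-9;             // eps^2 = 1.0e-18, below 2^-52
        double diag = 1.0 + eps * eps;    // rounds to exactly 1.0
        double det  = diag * diag - 1.0;  // computed determinant of Jt.J
        System.out.println("1 + eps^2 == 1 : " + (diag == 1.0)); // true
        System.out.println("det(Jt.J)      = " + det);           // 0.0
    }
}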

On the other hand, using LU has other benefits. First, you may build the
normal equations iteratively (i.e. build Jt.J by updating a matrix as
measurements are considered one after the other), and second, the matrix
to decompose is small (m x m, where m is the number of parameters of the
model), smaller than the n x m jacobian handled by the QR decomposition
(but beware, nobody really computes the n x n Q matrix). The QR
decomposition involves about twice as many operations as the LU-based
approach.
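
The iterative build-up is straightforward; a sketch with illustrative
names, where each measurement contributes one jacobian row and one
residual, and only the small m x m matrix is ever stored:

/**
 * Add one measurement to the normal equations. jRow is the jacobian
 * row of this measurement, res its residual. Only the m x m matrix
 * Jt.J and the m-vector Jt.r are stored, never the full n x m jacobian.
 */
static void addMeasurement(double[][] jtj, double[] jtr,
                           double[] jRow, double res) {
    int m = jRow.length;
    for (int i = 0; i < m; ++i) {
        jtr[i] += jRow[i] * res;             // Jt.r update
        for (int k = 0; k < m; ++k) {
            jtj[i][k] += jRow[i] * jRow[k];  // rank-one Jt.J update
        }
    }
}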

So the choice is size versus accuracy for simple problems, and you can
only choose accuracy for difficult problems, since the other alternative
simply fails. For optimization problems, the Levenberg-Marquardt
algorithm (which uses a QR decomposition as one of the many parts of the
algorithm) is the method of choice you will find in many applications.
It works really well, and few people bother to study what the better
alternative really is.

In any case, for least squares problems, the decomposition used is an
implementation choice, and the user doesn't really need to see the
intermediate steps (building J or not, decomposing Jt.J using LU or J
using QR, applying residuals, updating the parameters). He chooses one
method or the other and gets the final result.
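
To illustrate this last point, the user-facing contract can be as small
as the following (a hypothetical interface, not the actual commons-math
API):

/**
 * Hypothetical user-facing contract: the decomposition used internally
 * (LU on Jt.J or QR on J) is invisible to the caller.
 */
public interface LeastSquaresSolver {
    /** Fit the model parameters to the measurements and return them. */
    double[] solve(double[] initialGuess, double[] measurements);
}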

Luc


