commons-dev mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: [math] optimization refactoring
Date Mon, 23 Feb 2009 00:28:23 GMT
I think that Dimitri overstates his case a bit.

This multiplication in observation space works for some algorithms, but not
for others.  Ordinary least squares regression is something of an exception
here.  Logistic regression is a simple counter-example.
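
A minimal numeric check (with made-up numbers, not from this thread) suggests
why the premultiplication trick breaks down for logistic regression: the
logistic loss is nonlinear in the linear predictor, so weighting a
per-observation likelihood term is not the same as rescaling that
observation's data by the square root of its weight.

```python
import math

# Per-observation logistic loss: log(1 + exp(-y * beta * x)), with y in {-1, +1}.
def logloss(beta, x, y):
    return math.log1p(math.exp(-y * beta * x))

beta, x, y, w = 0.7, 2.0, 1.0, 4.0  # hypothetical single observation and weight

weighted = w * logloss(beta, x, y)             # weight the likelihood term directly
whitened = logloss(beta, math.sqrt(w) * x, y)  # rescale x by sqrt(w) instead

# For ordinary least squares these two routes would coincide;
# for the logistic loss they clearly do not.
assert abs(weighted - whitened) > 0.1
```

Note also that in logistic regression the response is a class label, so there
is nothing meaningful to premultiply on the observation side in the first
place.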

A vector of weights is still useful, and it helps users.  It may also be
useful in some situations to allow a full correlation matrix, but I haven't
had a need for that yet.

On Sun, Feb 22, 2009 at 11:24 AM, Dimitri Pourbaix <pourbaix@astro.ulb.ac.be
> wrote:

> Either one considers the full weighting matrix (including potential
> correlation between observations) or one does not account for any weight
> at all.  By premultiplying both the function matrix and the observation
> vector  by the square root of the weight matrix, one can forget about it
> completely in the rest of the computation.
>
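
For the case where the trick does apply, here is a small sketch (hypothetical
data, diagonal weight matrix) of the premultiplication Dimitri describes:
scaling each row of the design matrix and the observation vector by the
square root of its weight reduces weighted least squares to ordinary least
squares.

```python
import math

# Toy one-parameter model y ~ beta * x with per-point weights (made-up data).
x = [1.0, 2.0, 3.0]
y = [1.1, 1.9, 3.2]
w = [1.0, 4.0, 0.25]

# Direct weighted normal equation: beta = sum(w x y) / sum(w x^2).
beta_weighted = (sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
                 / sum(wi * xi * xi for wi, xi in zip(w, x)))

# Premultiply rows by sqrt(w), then solve the ORDINARY least-squares problem.
xs = [math.sqrt(wi) * xi for wi, xi in zip(w, x)]
ys = [math.sqrt(wi) * yi for wi, yi in zip(w, y)]
beta_ols = sum(a * b for a, b in zip(xs, ys)) / sum(a * a for a in xs)

# The two solutions agree, so the rest of the computation can ignore weights.
assert abs(beta_weighted - beta_ols) < 1e-12
```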



-- 
Ted Dunning, CTO
DeepDyve
