commons-dev mailing list archives

From Gilles Sadowski <>
Subject Re: [math] OpenGamma library
Date Fri, 14 Oct 2011 23:55:39 GMT

> > [...]
> >
> > I might have missed the goal of your proposal, but I think that the main
> > point of the discussion was about having a separate class for operations.
> > I don't recall that a new implementation ("SymmetricMatrix") with
> > specifically optimized storage was rejected.
> >
> > In fact, for a few weeks now I have wanted to ask whether someone would
> > be interested in providing a symmetric matrix; the primary motivation
> > for me is that it would simplify some code in "BOBYQAOptimizer".
> > I see no reason why you would not be welcome to create a new
> > "SymmetricMatrix" class.
> >
> I would be delighted to collaborate.

As I said, I would be an interested *user* of the symmetric matrix
implementation. ;-)
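For concreteness, here is a minimal sketch of what packed storage for such a
class could look like. The class and method names are purely illustrative,
not an actual Commons Math API: only the lower triangle is stored, roughly
halving memory, and get/set map any (row, col) pair into the packed 1-D
array.

```java
// Hypothetical sketch of a packed symmetric matrix (illustrative names,
// not Commons Math API). Only the lower triangle is stored.
public class PackedSymmetricMatrix {
    private final int n;
    // Lower triangle in row-major order: n * (n + 1) / 2 entries.
    private final double[] data;

    public PackedSymmetricMatrix(int dimension) {
        this.n = dimension;
        this.data = new double[dimension * (dimension + 1) / 2];
    }

    private int index(int row, int col) {
        // Ensure row >= col so we always address the stored triangle;
        // this is what makes getEntry(i, j) == getEntry(j, i).
        if (row < col) {
            int tmp = row; row = col; col = tmp;
        }
        return row * (row + 1) / 2 + col;
    }

    public double getEntry(int row, int col) {
        return data[index(row, col)];
    }

    public void setEntry(int row, int col, double value) {
        data[index(row, col)] = value;
    }

    public int getDimension() {
        return n;
    }
}
```

Setting entry (0, 2) and reading back (2, 0) returns the same value, which
is the symmetry invariant such a class would guarantee by construction.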

> The proposal was not denied, but the
> response was overwhelmingly negative. Perhaps we can start a new discussion
> so I don't start top posting... ;-)


> As I mentioned already, I am very
> impressed by BOBYQA.

I guess that you mean "impressed by [the performance]".
I am worried about the code complexity, for long-term maintenance. The
C-to-Java translation was included on the understanding that it would be
transformed into understandable Java code.

> [...] 
> Additionally, I was hoping you would give your thoughts on the BlockMatrix
> design.

I have no particular thoughts about it at the moment.
I agree that this implementation should not be thrown away unless all
operations are proven (by benchmarks?) to always be less efficient than
alternative implementations.[1] But this discussion should also go into its
own thread...

I think that there was an important remark in the paper referred to in this
thread (2nd paragraph, page 10) saying (IIUC) that changing the storage
layout from 2D to 1D effectively led to a speed improvement *only* for
matrices of sizes larger than 1010. Which leads me to think that such
changes are certainly not always worth it. Do we really want code that is
more difficult to understand and maintain, on the grounds that it will be
faster for "large" matrices? What if some CM users are only interested in
"small" matrices?
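To make the trade-off concrete, here is an illustrative comparison (not
Commons Math code) of the two layouts discussed above: a plain `double[][]`
versus a flat row-major `double[]`. The flat layout gives one contiguous
allocation and better locality, at the cost of an explicit index
computation everywhere.

```java
// Illustrative comparison of 2-D versus flat row-major storage.
// Not Commons Math code; method names are made up for this example.
public class LayoutDemo {
    // 2-D layout: simple, readable, but each row is a separate object.
    static double get2D(double[][] a, int i, int j) {
        return a[i][j];
    }

    // Flat row-major layout: one contiguous array, explicit indexing.
    static double get1D(double[] a, int cols, int i, int j) {
        return a[i * cols + j];
    }
}
```

Both accessors return the same entry for the same logical (i, j); the
question raised above is whether the locality gain of the flat form ever
pays for the loss of readability at the matrix sizes users actually have.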

Best regards,

[1] Actually I've micro-benchmarked square matrix multiplication and, at
    least up to size 100, "Array2DRowRealMatrix" (the *new* version of
    "multiply", inspired by Jama) was faster than "BlockMatrix"...
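A minimal, self-contained harness in the spirit of that micro-benchmark
might look like the following. It does not use the Commons Math classes;
instead it times two loop orderings of a naive square multiply, which is
the kind of locality effect the Jama-style "multiply" exploits. Real
measurements should use a proper framework (e.g. JMH) to avoid JIT warm-up
artifacts.

```java
// Hypothetical micro-benchmark sketch (not the actual benchmark code).
public class MultiplyBench {
    // Textbook i-j-k ordering: walks columns of b, poor locality.
    static double[][] multiplyIJK(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int k = 0; k < n; k++) {
                    s += a[i][k] * b[k][j];
                }
                c[i][j] = s;
            }
        }
        return c;
    }

    // i-k-j ordering: streams over rows of b, cache-friendly for
    // row-major Java arrays (the Jama-inspired trick).
    static double[][] multiplyIKJ(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++) {
                double aik = a[i][k];
                for (int j = 0; j < n; j++) {
                    c[i][j] += aik * b[k][j];
                }
            }
        }
        return c;
    }

    // Crude wall-clock timing; fine for a rough comparison only.
    static long timeNanos(Runnable r) {
        long t0 = System.nanoTime();
        r.run();
        return System.nanoTime() - t0;
    }
}
```

Both orderings produce identical results; only their memory-access
patterns differ, which is exactly what a layout benchmark should isolate.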
