commons-dev mailing list archives

From Gilles Sadowski <>
Subject Re: [math] OpenGamma library
Date Sat, 15 Oct 2011 00:28:50 GMT
> [...]
> I think that there was an important remark in the paper referred to in this
> thread (2nd paragraph, page 10) saying (IIUC) that changing the storage
> layout from 2D to 1D effectively led to a speed improvement *only* for
> matrices of sizes larger than 1010.  Which leads me to think that such
> changes are certainly not always worth it. Do we really want code that is
> more difficult to understand and maintain on the grounds that it will be
> faster for "large" matrices? What if some CM users are only interested in
> "small" matrices?
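
To make the quoted layout change concrete, here is a minimal sketch (not CM's
or OpenGamma's actual internals; class and variable names are mine) of the
same dense matrix stored 2D versus flattened 1D in row-major order:

```java
// Sketch: 2D storage vs. flattened 1D row-major storage for a dense matrix.
// The 1D form keeps all elements in one contiguous array and removes one
// level of pointer indirection per row access.
public class LayoutSketch {
    public static void main(String[] args) {
        int rows = 3, cols = 4;

        // 2D storage: one separate Java array object per row.
        double[][] a2d = new double[rows][cols];
        // 1D storage: element (i, j) lives at flat index i * cols + j.
        double[] a1d = new double[rows * cols];

        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                double v = i * 10 + j;
                a2d[i][j] = v;
                a1d[i * cols + j] = v; // same logical element, flat index
            }
        }

        // Both layouts hold identical data; any speed-up from the 1D form
        // comes from contiguity and the avoided per-row indirection.
        System.out.println(a2d[2][3] == a1d[2 * cols + 3]);
    }
}
```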

My own benchmark shows (cf. the "ratio" column in the following table) that
there is already an improvement for a matrix of size 303 (instead of 1010):

operate (calls per timed block: 10000, timed blocks: 100, time unit: ms)
name          time/call       std error       total time  ratio       difference
Commons Math  1.19667835e-01  2.70020491e-04  1.1967e+05  1.0000e+00   0.00000000e+00
OpenGamma 1D  1.04407216e-01  2.56229711e-04  1.0441e+05  8.7248e-01  -1.52606192e+04
OpenGamma 2D  1.13865944e-01  1.82887474e-04  1.1387e+05  9.5152e-01  -5.80189118e+03

However, it also shows that the improvement (for the 1D layout) is only ~13%
instead of the ~30% reported by the benchmark in the paper...
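
For context, the timed-block scheme reflected in the table header (many calls
per timed block, averaged over many blocks) can be sketched roughly as follows;
the benchmarked operation and the block counts here are placeholders, not the
harness actually used:

```java
// Rough sketch of a timed-block micro-benchmark: each block times many calls,
// and the mean and standard error of the per-call time are computed over blocks.
public class BenchSketch {
    // Placeholder for the operation under test (e.g. a matrix "operate" call).
    static double work(double x) { return Math.sqrt(x) + 1.0; }

    public static void main(String[] args) {
        final int callsPerBlock = 10_000;
        final int blocks = 100;
        double sum = 0, sumSq = 0, sink = 0;

        for (int b = 0; b < blocks; b++) {
            long t0 = System.nanoTime();
            for (int c = 0; c < callsPerBlock; c++) {
                sink += work(c); // accumulate so the JIT cannot drop the calls
            }
            double msPerCall = (System.nanoTime() - t0) / 1e6 / callsPerBlock;
            sum += msPerCall;
            sumSq += msPerCall * msPerCall;
        }

        double mean = sum / blocks;
        double stdErr = Math.sqrt((sumSq / blocks - mean * mean) / blocks);
        System.out.printf("time/call = %.8e ms, std error = %.8e ms%n", mean, stdErr);
        if (sink < 0) System.out.println(sink); // never taken; keeps sink live
    }
}
```

A real harness would also warm up the JIT before timing; the measured numbers
vary from run to run, which is what the std-error column quantifies.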

I don't think that CM development should be focused on performance
improvements that are so sensitive to the actual hardware (if it's indeed
the varying amount of CPU cache that is responsible for this discrepancy).

If there are (human) resources inclined to rewrite CM algorithms in order to
boost performance, I'd suggest also exploring the multi-threading route, as
I feel that the kind of optimization described in this paper is more in the
realm of the JVM itself.
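
As one illustration of that route (a sketch under my own assumptions, not a
concrete CM proposal): a dense multiply over 1D row-major storage can be
parallelized over rows, since each output row is written by exactly one task:

```java
import java.util.stream.IntStream;

// Sketch: row-parallel dense matrix multiply on 1D row-major storage.
// Each parallel task owns a disjoint slice of the output, so no locking is needed.
public class ParallelMultiplySketch {
    static double[] multiply(double[] a, double[] b, int n) {
        double[] c = new double[n * n];
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int k = 0; k < n; k++) {
                double aik = a[i * n + k];
                for (int j = 0; j < n; j++) {
                    c[i * n + j] += aik * b[k * n + j];
                }
            }
        });
        return c;
    }

    public static void main(String[] args) {
        int n = 2;
        double[] a = {1, 2, 3, 4}; // [[1,2],[3,4]]
        double[] b = {5, 6, 7, 8}; // [[5,6],[7,8]]
        double[] c = multiply(a, b, n);
        System.out.println(c[0] + " " + c[1] + " " + c[2] + " " + c[3]);
    }
}
```

Whether this pays off again depends on matrix size: for "small" matrices the
fork/join overhead can easily exceed the arithmetic saved.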

