commons-user mailing list archives

From Luc Maisonobe <>
Subject Re: [math] Re: [LINEAR] Performance and bugs of 2.0 library
Date Tue, 02 Feb 2010 22:11:01 GMT
Peter A wrote:
> Luc,
> Do you think this would be fair?  For posting "official" benchmark results
> using MatrixUtils.createRealMatrix() to declare the matrices.

I think this could be a good approach. That way, the official benchmark
would reflect what users really experience. If the simplistic rationale
behind the factory is bad, then they will see it, and that is a good thing.
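For context, the factory's selection rule described later in this thread can be sketched roughly as follows. The class names and the 4096-element threshold come from the discussion below; the `pickType` helper is purely illustrative and is not the actual commons-math source:

```java
// Illustrative sketch of the dispatch rule attributed to
// MatrixUtils.createRealMatrix(rows, columns) in this thread:
// more than 4096 elements -> block layout, otherwise a plain 2-D array.
public class MatrixFactorySketch {

    // Threshold quoted in the discussion (number of elements).
    static final int BLOCK_THRESHOLD = 4096;

    // Hypothetical helper: returns the name of the backing type the
    // factory would reportedly pick for the given dimensions.
    static String pickType(int rows, int columns) {
        long elements = (long) rows * columns;
        return elements <= BLOCK_THRESHOLD
                ? "Array2DRowRealMatrix"
                : "BlockRealMatrix";
    }

    public static void main(String[] args) {
        // 64 x 64 = 4096 elements: not greater than the threshold.
        System.out.println(pickType(64, 64));   // prints Array2DRowRealMatrix
        // 64 x 65 = 4160 elements: above the threshold.
        System.out.println(pickType(64, 65));   // prints BlockRealMatrix
    }
}
```

Benchmarking through the factory, as proposed, would exercise exactly this dispatch, so a poor threshold would show up directly in the published numbers.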

> I'm not a
> huge fan of having multiple results for a single library if it can be
> avoided.  I'm worried about needing to benchmark every permutation that each
> library offers.   All those permutations would make the charts even harder
> to understand and make the benchmark take even longer to run.  Currently it
> takes about 35 hours to run everything from scratch.

I understand your concerns.

> Internally there will be two additional factories that can be added.  These
> will use either the block format or 2d array exclusively.  Results from
> those wouldn't be displayed on the official results page, but could be used
> by the commons-math developers.

Yes, this would be an interesting result for us. I also guess that once
we have sorted out the current bugs in our code, we could add our own
factories and run the benchmarks ourselves to compare different
implementation choices. For now, we have to solve our SVD issues: we
have to make the code run before making it run fast.

> There is also a group of us discussing benchmarking of linear algebra
> libraries.  If you're interested, I can add you to that list.  It's composed
> of other library developers.

Yes, I would like to read the discussions. I'm not sure I will
participate very often myself, though, because I am quite busy.

> Looking at stability results more carefully, SVD did get better in 2.1a, but
> appears to have accuracy issues that I guess you are already aware of.

I'm not sure we are aware of everything. If you don't mind, please open
a JIRA issue at <>. We will
look at it and close it if it relates to an already known problem. If
it is not yet known to us, we will have a new itch to scratch.


> - Peter
> On Tue, Feb 2, 2010 at 2:44 PM, Luc Maisonobe <> wrote:
>> Peter A wrote:
>>> All,
>>> I posted the new stability and runtime performance benchmarks at:
>>> This includes the 2.1a SVN code from last Friday.  I don't really see
>> much
>>> of a change since 2.0.
>> From a pure performance point of view, there should not be any difference.
>> The expected changes are rather that some (not all ...) SVD cases are now
>> handled properly (or at least compute something).
>>> If a commons-math developer has some time it would
>>> be helpful if he/she/it could take a look at:
>>> and tell me if I'm testing commons-math correctly.
>> You should probably add a separate benchmark using Array2DRowRealMatrix
>> instead of BlockRealMatrix. Depending on the dimensions and on the
>> operation, one matrix type may be better suited than the other. In
>> fact, I also wonder if we should not add another type someday, backed by
>> a one-dimensional array, as suggested some months ago on this list.
>> The currently suggested way to choose one type or the other is to rely on
>> MatrixUtils.createRealMatrix(rows, columns). The method simply checks
>> whether the number of elements is greater than 4096 or not. Small
>> matrices are created as Array2DRowRealMatrix, larger matrices are created
>> as BlockRealMatrix. This choice was OK for simple operations (multiply,
>> add, transpose ...) on my machine this summer, but is certainly not a
>> good general choice. Having separate, reliable benchmarks from your tool
>> would allow us to improve at least the documentation and the hints to
>> users on which type to use, depending on the most costly operation they
>> perform.
>> Luc
>>> - Peter
>>> On Sat, Jan 30, 2010 at 1:58 AM, Ted Dunning <> wrote:
>>>> This comparison is also confounded by the fact that most C++ libraries
>>>> try to make use of native binary libraries such as ATLAS and often get a
>>>> dramatic speedup as a result.
>>>> On Fri, Jan 29, 2010 at 4:55 PM, Peter Abeles <> wrote:
>>>>> I have seen some ad hoc comparisons on-line. Mostly just matrix
>>>>> multiply.
>>>>> Having said that I wouldn't be surprised if I missed something.  Based
>>>>> on personal experience I would expect about a 2-3 times speed hit
>>>>> between well written java and c/c++ code because of array overhead and
>>>>> language constraints.  For pure arithmetic I have gotten nearly
>>>>> identical performance.
>>>> --
>>>> Ted Dunning, CTO
>>>> DeepDyve
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail:
>> For additional commands, e-mail:

