commons-dev mailing list archives

From Bruce A Johnson <>
Subject Re: [math] Major speed improvement to 1D FFT (MATH-732)
Date Tue, 07 Feb 2012 12:17:48 GMT

On Feb 7, 2012, at 6:34 AM, Gilles Sadowski wrote:

> Hello.
>> Hello S├ębastien,
>>> Kurt has recently proposed a patch for 1D-FFT which is much faster
>>> than our current impl, while still passing the tests (of course).
>>> Am I likely to hurt anyone's feelings if I replace the old impl by the new impl?
>> Please, go ahead, a faster implementation is always good, as long as it
>> can be maintained.
>>> Also, I think that one of the reasons why our previous implementation
>>> was so slow is because it used internally Complex objects. Since these
>>> objects are immutable, we end up creating *a lot* of small objects.
>>> Apparently, the GC and JIT are not so good at that ;) (not being an
>>> expert, I'm not sure that the culprit is indeed the JIT or the GC...
>>> but surely someone is to be blamed).
>> This seems strange to me. Lots of improvements have been added to Java 5
>> and Java 6 about JVM optimization, and small objects are not really
>> costly now. I do trust JVM implementors here and really like using small
>> immutable objects.
> The difference might be that there is a slight overhead in creating objects
> (however small) with respect to using arrays of primitives in this scenario.
> Trying to borrow from your explanation about cache-friendly implementations:
> the cache will contain a larger chunk of an array of "double" than an array
> of "Complex" (?).
> Also, when the array of Complex objects is split into its real and imaginary
> constituent arrays, the computations are performed with "double", thus
> bypassing method calls ("add", "subtract") and their possible precondition
> checks.
>>> I remember we recently had this conversation on the ML: one of Gilles
>>> or Luc argued that for very low-level algorithms, it didn't really
>>> matter if the parameters were passed as "very crude" objects, since it
>>> was most likely that the user would have to write an adapter to its
>>> own data format. So I would suggest we give up Complex[] altogether in
>>> the interface of FFT, and replace it with double[] arrays, with the
>>> following layout :
>>> data[2 * i] = real part
>>> data[2 * i + 1] = imaginary part.
>>> What do you think?
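[For concreteness, the interleaved layout proposed above could be packed and unpacked as in the following sketch. The class and method names are purely illustrative, not part of any Commons Math API.]

```java
// Hypothetical helpers for the interleaved layout:
// data[2 * i] = real part, data[2 * i + 1] = imaginary part.
public final class InterleavedLayout {

    // Pack separate real/imaginary arrays into one interleaved array.
    public static double[] pack(double[] re, double[] im) {
        final double[] data = new double[2 * re.length];
        for (int i = 0; i < re.length; i++) {
            data[2 * i]     = re[i];
            data[2 * i + 1] = im[i];
        }
        return data;
    }

    // Unpack an interleaved array back into { real[], imaginary[] }.
    public static double[][] unpack(double[] data) {
        final int n = data.length / 2;
        final double[] re = new double[n];
        final double[] im = new double[n];
        for (int i = 0; i < n; i++) {
            re[i] = data[2 * i];
            im[i] = data[2 * i + 1];
        }
        return new double[][] { re, im };
    }
}
```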
>> I agree with this view (it may be me who expressed this). A double array
>> as an interface for our library seems good to me.
> I'm wary of this sort of "optimization" (as it decreases the code clarity).
> -1 until it has been proven that it brings a _significant_ performance
>   improvement.
> At the moment, I would of course keep the internal switch to arrays of
> primitives but also the API that takes and returns "Complex[]" objects.
> A change[1] that might bring an added performance improvement is by
> combining the "getRealArray()" and "getImaginaryArray()" methods into a
> single one, in order to loop only once over the array of "Complex" objects:
> ---CUT---
> public static double[][] getRealAndImaginaryArrays(Complex[] dataC) {
>    final double[][] dataRI = new double[2][dataC.length];
>    final double[] dataR = dataRI[0];
>    final double[] dataI = dataRI[1];
>    for (int i = 0; i < dataC.length; i++) {
>        final Complex c = dataC[i];
>        dataR[i] = c.getReal();
>        dataI[i] = c.getImaginary();
>    }
>    return dataRI;
> }
> ---CUT---
> Then, for "transform" we would have:
> ---CUT---
> public Complex[] transform(Complex[] f) {
>    final double[][] dataRI = getRealAndImaginaryArrays(f);
>    transformInPlace(dataRI[0], dataRI[1], false);
>    // etc.
> }
> ---CUT---
> and similarly for the other methods.
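[The "// etc." above would presumably convert the transformed arrays back into a Complex[]. A guess at how such a counterpart helper might look (the Complex class below is a minimal stand-in for Commons Math's own, just to make the sketch self-contained; "createComplexArray" is a hypothetical name, also untested):]

```java
// Minimal stand-in for Commons Math's Complex, for illustration only.
final class Complex {
    private final double re;
    private final double im;
    Complex(double re, double im) { this.re = re; this.im = im; }
    double getReal() { return re; }
    double getImaginary() { return im; }
}

public final class TransformSketch {
    // Hypothetical counterpart to getRealAndImaginaryArrays: rebuild a
    // Complex[] result from the two primitive arrays after the in-place FFT.
    public static Complex[] createComplexArray(double[][] dataRI) {
        final double[] dataR = dataRI[0];
        final double[] dataI = dataRI[1];
        final Complex[] c = new Complex[dataR.length];
        for (int i = 0; i < dataR.length; i++) {
            c[i] = new Complex(dataR[i], dataI[i]);
        }
        return c;
    }
}
```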
> Two other points, unrelated to the above:
> 1. I think that the "scaleArray" method (the version that takes a "double"
>   argument) should be moved to the "util.MathArrays" class (and renamed
>   "scale").
> 2. Why are some methods "protected" (e.g. "fft")? IMHO, they should be
>   "private".
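[A sketch of what the "scale" method from point 1 might look like in "util.MathArrays". The body is a guess at the behavior of the existing "scaleArray" (multiply each element by a factor, returning a new array); class name and signature are illustrative only:]

```java
public final class MathArraysSketch {
    // Proposed MathArrays.scale: return a new array whose elements are
    // the input elements multiplied by the given factor.
    public static double[] scale(double val, double[] arr) {
        final double[] newArr = new double[arr.length];
        for (int i = 0; i < arr.length; i++) {
            newArr[i] = arr[i] * val;
        }
        return newArr;
    }
}
```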
> Best,
> Gilles
> [1] This is untested.

I've been testing the new, faster code in a real-world application and I'm a big fan. But
I, too, would be hesitant to change the API. There are several ways it could be changed:
transform(double[] real_imag)

transform(double[] real, double[] imag)

transform(ComplexVector cvec)

(where ComplexVector might internally be implemented as either a real and imaginary double
array, or the alternating real/imag array, or an array of Complex).
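[One possible shape for such a hypothetical ComplexVector: an interface that exposes element access while hiding whether the backing store is two parallel primitive arrays, one interleaved array, or Complex objects. Everything below is a sketch, not a proposal for a concrete API:]

```java
// Hypothetical abstraction over the three storage layouts discussed above.
interface ComplexVector {
    int size();
    double getReal(int i);
    double getImaginary(int i);
}

// Example implementation backed by two parallel primitive arrays.
final class SplitComplexVector implements ComplexVector {
    private final double[] re;
    private final double[] im;

    SplitComplexVector(double[] re, double[] im) {
        this.re = re;
        this.im = im;
    }

    public int size() { return re.length; }
    public double getReal(int i) { return re[i]; }
    public double getImaginary(int i) { return im[i]; }
}
```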

Choosing among these and other alternatives (including the current) might best be left to
a post-3.0 discussion of optimal handling of Complex arrays in general.



To unsubscribe, e-mail:
For additional commands, e-mail:
