commons-dev mailing list archives

From Phil Steitz <>
Subject Re: [math] [GUMP@vmgump]: Project commons-math (in module apache-commons) failed
Date Tue, 17 May 2011 02:12:22 GMT
On 5/16/11 3:47 PM, Gilles Sadowski wrote:
> On Mon, May 16, 2011 at 02:39:01PM -0700, Phil Steitz wrote:
>> On 5/16/11 3:44 AM, Dr. Dietmar Wolz wrote:
>>> Nikolaus Hansen, Luc, and I discussed this issue in Toulouse.
> Reading that, I've been assuming that...
>>> We have two options to handle this kind of failure in tests of stochastic
>>> optimization algorithms:
>>> 1) Fixed random seed - but this reduces the value of the test
>>> 2) Use the RetryRunner - the preferred solution
>>> @Retry(3) should be sufficient for all tests.
>> The problem with that is that it is really equivalent to
>> reducing the sensitivity of the test: if, e.g., the test picks up
>> anomalies with stochastic probability alpha as is, making it
>> retry three times reduces that sensitivity to alpha^3.  I think
>> the right answer here is to find out why the test is failing with
>> higher than, say, .001 probability and fix the underlying
>> problem.  If the test itself is too sensitive, then we should fix
>> that.  Then switch to a fixed seed for the released code,
>> reverting to random seeding when the code is under development.
> ... they had settled on the best approach for the class at hand.

Whatever rationale was discussed should be summarized here, on the
public list.
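The sensitivity argument quoted above is simple to check with a little arithmetic. This is a minimal sketch (the class name and the alpha value are illustrative, not from the thread): if a single run of the test flags a stochastic anomaly with probability alpha, then with @Retry(3) the build only fails when all three independent runs fail, so the effective chance of catching the anomaly drops to alpha^3.

```java
public class RetrySensitivity {
    public static void main(String[] args) {
        double alpha = 0.05; // illustrative per-run probability that the test flags the anomaly
        int retries = 3;     // as in @Retry(3)

        // The build only fails when every one of the retried runs fails,
        // so for independent runs the effective sensitivity is alpha^retries.
        double effective = Math.pow(alpha, retries);

        System.out.printf("per-run sensitivity: %.4f%n", alpha);
        System.out.printf("effective sensitivity with %d retries: %.6f%n", retries, effective);
    }
}
```

With alpha = 0.05, three retries push the detection probability down to 0.000125, which is why retrying masks intermittent problems rather than fixing them.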
> [I.e. we had raised the possibility that there could be a bug in the code
> that triggered test failures, but IIUC they have now concluded that the code
> is fine and that failures are expected to happen sometimes.]

I would like to understand better why that is the case.  If failures
happen sometimes in the tests, does that mean that bad results are
expected to be returned sometimes?  If so, have we documented that?

> It still seems strange that it is always the same 2 tests that fail.
> Is there an explanation for this behaviour, that we might add as a comment
> in the test code?

I agree here, and the explanation possibly belongs in the javadoc
for the application code as well.  If the code is prone to
generating spurious results sometimes, we need to make that clear
in the javadoc.
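The seeding policy Phil describes earlier in the thread (fixed seed for released code, random seed during development) can be sketched as below. This is only an assumption of how it might look; the class, the RELEASE_MODE flag, and the seed constant are all hypothetical, and java.util.Random stands in for whatever generator the tests actually use. Logging the seed is the key detail: a failure under a random seed can then be reproduced by plugging the logged value back in.

```java
import java.util.Random;

public class SeedPolicy {
    // Hypothetical switch: true for released code (reproducible tests),
    // false during development (random seeds exercise more of the input space).
    private static final boolean RELEASE_MODE = true;

    // Any recorded constant works; the value itself is arbitrary.
    private static final long FIXED_SEED = 20110517L;

    public static Random testGenerator() {
        long seed = RELEASE_MODE ? FIXED_SEED : System.nanoTime();
        // Always log the seed so a random-seed failure can be replayed.
        System.out.println("test seed: " + seed);
        return new Random(seed);
    }

    public static void main(String[] args) {
        Random r = testGenerator();
        // With the fixed seed, this sequence is identical on every run.
        System.out.println(r.nextGaussian());
    }
}
```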

> Gilles

