commons-issues mailing list archives

From "Gilles (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MATH-621) BOBYQA is missing in optimization
Date Fri, 21 Oct 2011 15:48:34 GMT

    [ https://issues.apache.org/jira/browse/MATH-621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13132762#comment-13132762 ]

Gilles commented on MATH-621:
-----------------------------

"logically equivalent" == "mathematically equivalent"

What I'm asking is whether the test failures are _meaningful_.
I understand that one cannot expect the numbers to be identical down to the last decimal
places when some operations are reordered. When such a case arises, we should probably increase
the tolerances so that the test passes.
But I'm wondering whether it is normal that reordering should lead to an increase in the number
of function evaluations.

I think that the accuracy thresholds should take rounding into account, in the sense that
the results of two logically/mathematically equivalent computations should be considered equal
(unless there is an intrinsic feature of the algorithm causing "really" different results,
in which case a comment should make it clear).
In this instance,
{code}
 a + 2 * dx
{code}
and
{code}
 a + dx + dx
{code}
give different results.
One explanation could be that "a + dx" is still "a". But IMO, that means the algorithm
is fragile: an addition was intended but nothing actually happened. Hence, I'd tend to say
that any further computations are doubtful...
That's what I mean by "detect that the numerical procedure is in trouble": the input data
(e.g. the tolerance value) renders it ineffective, which should be detected and reported
as such.
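This absorption effect is easy to demonstrate. A minimal Java sketch (the values are chosen purely for illustration; they are not taken from BOBYQA):

```java
public class FpAbsorption {
    public static void main(String[] args) {
        double a = 1.0;
        // dx is smaller than half an ulp of 1.0 (~1.11e-16),
        // so adding it once to "a" is absorbed by rounding; 2 * dx is not.
        double dx = 1e-16;

        System.out.println(a + 2 * dx);  // prints 1.0000000000000002
        System.out.println(a + dx + dx); // prints 1.0 -- "a + dx" is still "a"

        // One way to "detect that the numerical procedure is in trouble":
        if (a + dx == a) {
            System.out.println("step size dx is ineffective at this scale");
        }
    }
}
```

So "logically/mathematically equivalent" expressions can indeed produce different floating-point results, and a check like the one above could report when a step size has become ineffective instead of silently continuing.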

In fact, I thought that the unit tests came from an original test suite used by BOBYQA's author!
Does such a suite exist?
Alternatively (related to your point 2), we could try to set up our own suite using "standard"
problems; there has been some attempt in this direction with the "BatteryNISTTest" class introduced
recently. That class has already exercised a code path not covered by the existing tests;
however, I was hoping that someone would be more systematic in selecting a test suite
of problems "well-known" to the optimization community.
Of course, this brings us back to the discussion we had a few weeks ago: do we wait for a hypothetical
expert, or do we do something now?

                
> BOBYQA is missing in optimization
> ---------------------------------
>
>                 Key: MATH-621
>                 URL: https://issues.apache.org/jira/browse/MATH-621
>             Project: Commons Math
>          Issue Type: New Feature
>    Affects Versions: 3.0
>            Reporter: Dr. Dietmar Wolz
>             Fix For: 3.0
>
>         Attachments: BOBYQA.math.patch, BOBYQA.v02.math.patch, BOBYQAOptimizer.java.patch,
BOBYQAOptimizer0.4.zip, bobyqa.zip, bobyqa_convert.pl, bobyqaoptimizer0.4.zip, bobyqav0.3.zip
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> During experiments with space flight trajectory optimizations I recently
> observed that the direct optimization algorithm BOBYQA
> http://plato.asu.edu/ftp/other_software/bobyqa.zip
> from Mike Powell is significantly better than the simple Powell algorithm
> already in commons.math. It needs significantly fewer function evaluations and is
> more reliable for high-dimensional problems. BOBYQA can replace CMA-ES in many
> more application cases than the simple Powell optimizer can.
> I would like to contribute a Java port of the algorithm.
> I maintained the structure of the original FORTRAN code, so the
> code is fast but not very nice.
> License status: Michael Powell has sent the agreement via snail mail
> - it hasn't arrived yet.
> Progress: The attached patch relative to the trunk contains both the
> optimizer and the related unit tests - which are all green now.  
> Performance:
> Performance difference (number of function evaluations),
> PowellOptimizer / BOBYQA, for different test functions (taken from
> the unit tests of BOBYQA; dimension=13 for most of the tests).
> Rosen = 9350 / 1283
> MinusElli = 118 / 59
> Elli = 223 / 58
> ElliRotated = 8626 / 1379
> Cigar = 353 / 60
> TwoAxes = 223 / 66
> CigTab = 362 / 60
> Sphere = 223 / 58
> Tablet = 223 / 58
> DiffPow = 421 / 928
> SsDiffPow = 614 / 219
> Ackley = 757 / 97
> Rastrigin = 340 / 64
> The number for DiffPow should be discussed with Michael Powell,
> I will send him the details.
> Open Problems:
> Some checkstyle violations because of the original Fortran source:
> - Original method comments were copied - they don't follow the javadoc standard
> - Multiple variable declarations in one line, as in the original source
> - Problems related to "goto" conversions:
>   "gotos" not convertible into loops were translated into a finite automaton (switch statement),
>   triggering "no default in switch" and
>   "fall through from previous case in switch" warnings,
>   which are usually bad style but make no sense here.
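As an aside, the goto-to-switch conversion described in the quoted issue can be sketched as follows. This is a hypothetical illustration of the general pattern (each former label becomes a case, and "goto LABEL" becomes a state assignment plus break inside an enclosing loop); it is not code from the actual port:

```java
public class GotoAsSwitch {
    // Sums n + (n-1) + ... + 1 using the state-machine pattern.
    public static int countDown(int n) {
        int state = 10; // entry "label"
        int result = 0;
        for (;;) {
            switch (state) {
            case 10: // former label 10: loop head
                if (n <= 0) { state = 30; break; }
                state = 20; break;
            case 20: // former label 20: body, then "goto 10"
                result += n;
                n--;
                state = 10; break;
            case 30: // former label 30: exit
                return result;
            // no default: by construction, no other state value occurs,
            // which is why the "no default in switch" warning is spurious here
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(countDown(4)); // prints 10
    }
}
```

With this pattern, a missing default case and apparent fall-through are inherent to the encoding of the original control flow, which is why the corresponding checkstyle warnings are not meaningful for the converted code.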

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
