Hi.
> > [Please refer to
> > https://issues.apache.org/jira/browse/MATH-519
> > for the context of this discussion.]
> >
> > Hi Luc.
> >
> > > Anyway, returning NaN or POSITIVE_INFINITY would work only with
> > some
> > > optimizers.
> >
> > Do you mean that it would fail with some optimization _algorithms_ or
> > some
> > unsafe _implementations_ of those algorithms?
>
> I think algorithms.
> Typically, direct methods like Nelder-Mead or Torczon's multidirectional method
> behave well with discontinuous functions. In fact, one often uses penalty functions
> in the form of large additive constants to mimic constraints with such algorithms.
I thought that by "returning NaN or POSITIVE_INFINITY would work only with
some optimizers", you meant that the algorithm would fail (i.e. produce
NaN or generate an exception), but this comment seems to mean that such
algorithms will succeed regardless (because they "behave well with
discontinuous functions").
If so, I don't understand the problem. The solution found might not be the
best one, but such a risk is often present with optimization.
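The penalty-function idea mentioned above can be sketched as follows. This is a hypothetical illustration (the class and method names are not existing Commons Math API): instead of returning NaN or POSITIVE_INFINITY outside the feasible region, the objective returns a large additive constant, which a direct-search method like Nelder-Mead can handle.

```java
// Hypothetical sketch: a large additive penalty mimics a constraint
// for a direct-search optimizer. Names are illustrative only.
public class PenaltyExample {
    static final double PENALTY = 1e10; // large additive constant

    // Objective defined only for x > 0; outside the feasible region
    // we return the penalty instead of NaN or POSITIVE_INFINITY.
    static double penalized(double x) {
        if (x <= 0) {
            return PENALTY;
        }
        return (x - 2) * (x - 2); // true objective, minimum at x = 2
    }

    public static void main(String[] args) {
        System.out.println(penalized(-1.0)); // penalty value
        System.out.println(penalized(2.0));  // feasible minimum
    }
}
```

As noted, this works with direct methods but not with gradient-based ones, since the penalty introduces a discontinuity.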
> Clearly, this does not work with gradient-based methods like Levenberg-Marquardt or
> simpler ones like steepest descent or conjugate gradients.
Fitting the test data with Levenberg-Marquardt worked (with "value" and
"gradient" either returning "NaN" or "POSITIVE_INFINITY")...
[Admittedly, it took about 17000 function evaluations; I don't know whether
such a high number of evaluations is due to the data or to "jumps" caused
by the special values...]
> >
> > In the former case, would "Double.MAX_VALUE" be OK?
>
> No. Gradient-based methods need smooth functions, and using Double.MAX_VALUE
> would not even be continuous.
"MAX_VALUE" also works for the test case.
After all, any value that is sufficiently different from all of the possible
objective function values should have the effect of pushing the optimizer
away from bad parameters.
> > In the latter, wouldn't there be a way to make the implementations
> > behave
> > correctly in the face of those "special" values?
> >
> > > For simple bounds on estimated parameters, this can be done using
> > > intermediate variables and mapping functions, [...]
> >
> > Yes, but that would be slightly less efficient (because of additional
> > function calls).
>
> Yes, but for simple bounds this would not be too much (using a logarithm or exponential).
> For double bounds, one typically uses a scaled logit function.
>
> > If this is the best choice, I'll implement a "conversion" class (for
> > the
> > "simple" bound case).
>
> It is a simple intermediate solution, but certainly not the best solution.
> I don't know if we should implement it because it's simple, or if we should already
> go all the way and implement properly constrained optimization. I'm leaning
> towards a complete solution.
This utility class would certainly be useful because constraints are not
currently implemented. If one needs to use a CM optimizer on an inherently
constrained parameter (such as the eccentricity of an elliptic orbit), one
would need to implement the conversion functionality anyway.
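The mapping approach described above (a logarithm/exponential for a simple bound, a scaled logit for a double bound) could look like the following sketch. The class and method names are illustrative, not existing Commons Math API: the optimizer works on an unbounded variable y, and the mapping turns it into the bounded parameter x.

```java
// Hypothetical sketch of the intermediate-variable mappings.
// For a lower bound a:  x = a + exp(y),                 y = log(x - a)
// For bounds (a, b):    x = a + (b - a) / (1 + exp(-y)) (scaled logit)
public class BoundsMapper {
    /** Map unbounded y to x in (a, +infinity). */
    public static double lowerBounded(double y, double a) {
        return a + Math.exp(y);
    }

    /** Inverse of lowerBounded. */
    public static double lowerBoundedInverse(double x, double a) {
        return Math.log(x - a);
    }

    /** Map unbounded y to x in (a, b) via a scaled logistic function. */
    public static double doubleBounded(double y, double a, double b) {
        return a + (b - a) / (1.0 + Math.exp(-y));
    }

    /** Inverse: logit of the normalized position of x within (a, b). */
    public static double doubleBoundedInverse(double x, double a, double b) {
        final double t = (x - a) / (b - a);
        return Math.log(t / (1.0 - t));
    }

    public static void main(String[] args) {
        // An eccentricity-like parameter constrained to (0, 1):
        double y = doubleBoundedInverse(0.3, 0.0, 1.0);
        System.out.println(doubleBounded(y, 0.0, 1.0)); // round-trips to 0.3
    }
}
```

The extra cost per evaluation is one exponential (or logistic) call, which matches the remark above that the overhead of this approach is small.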
Implementing constraints is a non-trivial feature. It would be nice to have,
but not at the cost of further delaying the 3.0 release.
> [...]
Regards,
Gilles

To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org
