commons-dev mailing list archives

From Gilles <>
Subject Re: [Math] LeastSquaresOptimizer Design
Date Fri, 25 Sep 2015 11:55:32 GMT
On Thu, 24 Sep 2015 21:41:10 -0500, Ole Ersoy wrote:
> On 09/24/2015 06:01 PM, Gilles wrote:
>> On Thu, 24 Sep 2015 17:02:15 -0500, Ole Ersoy wrote:
>>> On 09/24/2015 03:23 PM, Luc Maisonobe wrote:
>>>> Le 24/09/2015 21:40, Ole Ersoy a écrit :
>>>>> Hi Luc,
>>>>> I gave this some more thought, and I think I may have tapped out
>>>>> too soon, even though you are absolutely right about what an
>>>>> exception does in terms of bubbling execution up to a point where
>>>>> it stops or we handle it.
>>>>> Suppose we have an Optimizer and an Optimizer observer.  The 
>>>>> optimizer
>>>>> will emit three different events in the process of stepping
>>>>> through to the max number of iterations it is allotted:
>>>>> - SUCCESS (Solution found)
>>>>> - UPDATE (Cannot finish under the current settings)
>>>>> - END (Max iterations reached)
>>>>> So we have the observer interface:
>>>>> interface OptimizerObserver {
>>>>>      success(Solution solution)
>>>>>      update(Enum event, Optimizer optimizer)
>>>>>      end(Optimizer optimizer)
>>>>> }
>>>>> So if the Optimizer notifies the observer of `success`, then the
>>>>> observer does what it needs to with the results and moves on.  If 
>>>>> the
>>>>> observer gets an `update` notification, that means that given the
>>>>> current [constraints, numbers of iterations, data] the optimizer 
>>>>> cannot
>>>>> finish.  But the update method receives the optimizer, so it can 
>>>>> adapt
>>>>> it, and tell it to continue or just trash it and try something
>>>>> completely different.  If the `END` event is reached then the 
>>>>> Optimizer
>>>>> could not finish given the number of allotted iterations. The 
>>>>> Optimizer
>>>>> is passed back via the callback interface so the observer could 
>>>>> allow
>>>>> more iterations if it wants to...perhaps based on some metric 
>>>>> indicating
>>>>> how close the optimizer is to finding a solution.
>>>>> What this could do is allow the implementation of the observer to 
>>>>> throw
>>>>> the exception if 'All is lost!', in which case the Optimizer does 
>>>>> not
>>>>> need an exception.  Totally understand that this may not work
>>>>> everywhere, but it seems like it could work in this case.
>>>>> WDYT?
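[To make the proposal above concrete, here is a minimal Java sketch of the observer contract as described; all type names (`Solution`, `OptimizerEvent`) are hypothetical placeholders, not actual Commons Math API.]

```java
// Minimal sketch of the observer contract described above.
// All names are hypothetical, not actual Commons Math API.
interface Optimizer {
    void optimize();
}

// Reasons the optimizer may report before completing.
enum OptimizerEvent { CANT_CONVERGE }

final class Solution {
    final double[] point;
    Solution(double[] point) { this.point = point; }
}

interface OptimizerObserver {
    // A solution within tolerance was found; the observer consumes it.
    void success(Solution solution);
    // The optimizer cannot finish under its current settings; the
    // observer may adapt the passed-in optimizer, or discard it.
    void update(OptimizerEvent event, Optimizer optimizer);
    // The iteration budget was exhausted without a solution; the
    // observer may grant more iterations, or give up (e.g. throw).
    void end(Optimizer optimizer);
}
```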
>>>> With this version, you should also pass the optimizer in case of
>>>> success. In most cases, the observer will just ignore it, but in 
>>>> some
>>>> cases it may try to solve another problem, or to solve again with
>>>> stricter constraints, using the previous solution as the start 
>>>> point
>>>> for the more stringent problem. Another case would be to go from a
>>>> simple problem to a more difficult problem using some kind of
>>>> homotopy.
>>> Great - whoooh - glad you like this version a little better - for a
>>> sec I thought I had completely lost it :).
>> IIUC, I don't like it: it looks like "GOTO"...
> Inside the optimizer it would work like this:
> while (!done) {
>    if (can't converge) {
>        observer.update(Enum.CANT_CONVERGE, this);
>    }
> }

That's fine. What I don't like is to have provision for changing the
optimizer's settings and reuse the same instance.
The optimizer should be instantiated at the lowest possible level; it
will report everything to the observer, but the "report" is not to be
confused with the "optimizer".

> Then in the update method either modify the optimizer's parameters or
> throw an exception.

Referring to Luc's example of high-level code "H" calling some
mid-level code "M", itself calling CM's optimizer "CM": "M" may not
have enough info to know whether it's OK to retry "CM", but on the
other hand, "H" might not even be aware that "M" is using "CM".

What I tried to explain several times over the years (but failed to
convince anyone of) is that the same problem exists with exceptions:
however detailed the message, it might not make sense to the person
reading the console, because he is at level "H" and may have no idea
that "CM" is used deep down.
Having a specific exception which "M" can catch, extract info from, and
raise a more meaningful exception (and/or translate the message!) is a
much more flexible solution IMO.
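[The catch-extract-rethrow pattern described above could be sketched as
follows; the exception types and the "calibration" scenario are invented
purely for illustration, not taken from CM.]

```java
// Sketch of the exception-translation pattern: "M" catches the specific
// low-level exception, extracts its info, and raises one that is
// meaningful at level "H". All class names here are hypothetical.
class ConvergenceException extends RuntimeException {        // thrown by "CM"
    final int iterations;
    ConvergenceException(int iterations) {
        super("no convergence after " + iterations + " iterations");
        this.iterations = iterations;
    }
}

class CalibrationFailedException extends RuntimeException {  // raised by "M"
    CalibrationFailedException(String msg, Throwable cause) {
        super(msg, cause);
    }
}

class MidLevel {
    // "M" knows its own domain, so it can translate the low-level
    // failure into a message that makes sense to the caller "H",
    // while keeping the original exception as the cause.
    double calibrate() {
        try {
            throw new ConvergenceException(1000);  // stand-in for a "CM" call
        } catch (ConvergenceException e) {
            throw new CalibrationFailedException(
                "sensor calibration failed (solver stopped after "
                + e.iterations + " iterations)", e);
        }
    }
}
```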

>>> Note to self ... cancel
>>> therapy with Dr. Phil.  BTW - Gilles - this could also be used as a
>>> light weight logger.
>> I don't like this either (reinventing the wheel).
> You still want me to go and see Dr. Phil? :)

I just wish that we are allowed to use slf4j directly within CM.
Is there any reason to go through hoops in order to offer this facility
to users and developers?

[Well, if all iterative algorithms are rewritten within the "observer"
paradigm, then the logging can indeed be left at the caller's level
since the optimizer will report "everything"...  Going that route is an
option to be mentioned in the issue of allowing "slf4j" or not (see
below).]

>>> The Optimizer could publish information deemed
>>> interesting on each ITERATION event.
>> If we'd go for an "OptimizerObserver" that gets called at every
>> iteration,
>> there shouldn't be any overlap between it and "Optimizer":
> So inside the Optimizer we could have:
> while (!done) {
>     ...
>     if (observer.notifyOnIncrement())
>     {
>         observer.increment(this);
>     }
> }
> Which would give us an opportunity to cancel the run if, for example,
> it's not converging fast enough.

Providing ways to assess "too slow convergence" would be a very
interesting feature, I think.

> In that case we set done to true in
> the observer, and then allow the Optimizer to get to the point where
> it checks if it's done, calls the END notification on the observer,
> and then the observer takes it from there.
>> iteration limit should be dealt with by the observer, the iterative
>> algorithm would just run "forever" until the observer is satisfied
>> with the current state (solution is good enough or the allotted
>> resources - be they time, iterations, evaluations, ... - are
>> exhausted).
> It's possible to do it that way, although I think it's better if that
> code stays on the algorithm such that the Observer interface (The
> client / person using CM implements the Observer) is as simple as
> possible to implement.

By definition, the iteration concept is also present in the "Observer"
(via "notifyOnIncrement()", IIUC).
If the observer is notified, it should act according to the caller's
policy (e.g. call "optimizer.stop()").
[Since the optimizer was stopped before completing the assignment (vs
finding a solution within the tolerance settings), it should not be in
charge of further action (e.g. "return" something).]
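[That separation of concerns could look like the sketch below: the loop
just iterates, and the observer applies the caller's policy by calling
"stop()". Names are hypothetical and the algorithm body is reduced to a
counter.]

```java
// Sketch: the optimizer runs "forever"; stopping is entirely the
// observer's (i.e. the caller's) decision.
class IterativeOptimizer {
    private volatile boolean stopped = false;
    private int iterations = 0;

    void stop() { stopped = true; }
    int iterations() { return iterations; }

    void optimize(IterationObserver observer) {
        while (!stopped) {
            iterations++;                  // one iteration of the algorithm
            observer.iterationDone(this);  // policy may call stop() here
        }
    }
}

interface IterationObserver {
    void iterationDone(IterativeOptimizer optimizer);
}
```

An iteration budget then lives entirely in the observer, e.g.
`opt.optimize(o -> { if (o.iterations() >= 100) o.stop(); });`.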

>>> The observer could then be wired
>>> with SLF4J and perform the same type of logging that the Optimizer
>>> would perform.  So CM could declare SLF4J as a test dependency, and
>>> unit tests could log iterations using it.
>> As a "user", I'm interested in how the algorithms behave on my 
>> problem,
>> not in the CM unit tests.
> You could still do that.  I usually take my problem, simplify it down
> to a data set that I think covers all corner cases, and then run it
> through my unit tests while looking at the logging output to get an
> idea of how my algorithm is behaving.

When you "simplify", you don't see how the (production) code really
behaves.
Not even mentioning that it takes a lot of time to "simplify", and it
might be impossible (e.g. if the production code runs in another
environment).
>> The question remains unanswered: why not use slf4j directly?
> FWIU class path dependency conflicts for SLF4J are easily solved by
> excluding logging dependencies that other libraries bring in and then
> directly depending on the logging implementation that you want to 
> use.
> So people do run into issues, but I think they are solvable:

Then, could you please raise the question in a separate thread?

>>> Lombok also has a @Slf4j annotation that's pretty sweet.  Saves the
>>> SLF4J boilerplate.
>> I understand that using annotations can be a time-saver, but IMO not
>> so much for a library like CM; so in this case, the risk of 
>> depending
>> on another library must be weighed against the advantages.
> Lombok is compile time only, so there should be few drawbacks:

Yes, I've just been wondering about that.
So, could you please raise the question in a separate thread?

> I'll demo it on the LevenbergMarquardtOptimizer experiment, and we
> can see the level of code reduction we are able to achieve.  I think
> it's going to be fairly significant.



> Cheers,
> - Ole
