On Nov 30, 2012, at 12:52 PM, Luc Maisonobe <Luc.Maisonobe@free.fr> wrote:
> Hi all,
>
> On 30/11/2012 17:33, Konstantin Berlin wrote:
>> As a user of the optimization algorithms I am completely confused by
>> the change. It seems different from how optimization functions are
>> typically used and seems to create a barrier for no reason.
>
> The reason is that the framework has been done for several uses, not
> only optimization.
>
This is the part that confuses me. Why are you adding this complexity layer to the optimization
framework, especially when this is a completely nonstandard way to interface with it? If you
want some fancy framework for differentiation, why not create a wrapper function? I can't
tell exactly without better documentation, but this could potentially also add overhead.
If you are running an optimization inside an optimization, with millions of function
evaluations, this could be slower than a direct call to the function value and the Jacobian.
From what I can tell, you generate an object for every evaluation.
>>
>> I am not clear why you can't just leave the standard interface to an
>> optimizer as a function that computes the value and the Jacobian (in
>> the case of least squares), the gradient (for quasi-Newton methods),
>> and, if you actually have a full Newton method, also the Hessian.
>>
>> If the user wants to compute the Jacobian (gradient) using finite
>> differences, they can do it themselves, or wrap the function in a
>> class that you provide, which computes the finite differences using
>> the desired algorithm.
>
> This is already what many people do, and it can be done with both the
> older and the newer API. Nothing prevents users from using finite
> differences in the objects they pass to the optimizer.
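>
> For instance, a minimal sketch of such a wrapper (not code from the
> library; the function f and the step size h are placeholders):
>
>   // uses org.apache.commons.math3.analysis.MultivariateVectorFunction
>   public static double[][] jacobian(MultivariateVectorFunction f,
>                                     double[] x, double h) {
>       double[] fx = f.value(x);                 // value at the base point
>       double[][] jac = new double[fx.length][x.length];
>       for (int j = 0; j < x.length; j++) {
>           double[] xh = x.clone();
>           xh[j] += h;                           // perturb one coordinate
>           double[] fxh = f.value(xh);
>           for (int i = 0; i < fx.length; i++) {
>               jac[i][j] = (fxh[i] - fx[i]) / h; // forward difference
>           }
>       }
>       return jac;
>   }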
>
>>
>> Also, I can imagine cases where computation of the Jacobian can be
>> sped up if the function value is already known, which is lost if two
>> separate functions handle the derivatives and the actual function
>> value. For example f^2(x): its derivative is 2 f(x) f'(x), so the
>> value f(x) computed for the objective can be reused. You can probably
>> devise some kind of caching scheme, but still.
>>
>> Maybe I am missing something, but I spent about an hour trying to
>> figure out how to change my code to adapt to your new framework, and
>> I still haven't figured it out.
>
> I can easily understand. It is really new and needs some polishing and
> documenting. I am sorry for that.
>
> In your case, if you already have two different functions, you can merge
> them to create a MultivariateDifferentiableVectorFunction and pass this
> to the optimizer. See how
> FunctionUtils.toMultivariateDifferentiableVectorFunction
> does it, starting from a DifferentiableMultivariateVectorFunction.
>
> See below about the deprecation of this converter method, though.
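>
> In the meantime, a sketch of such a merge (the names valueFn and
> jacobianFn are placeholders for your two existing functions; like the
> converter above, it only handles the case where the point holds
> first-order canonical variables):
>
>   // uses org.apache.commons.math3.analysis.differentiation.*
>   MultivariateDifferentiableVectorFunction merged =
>       new MultivariateDifferentiableVectorFunction() {
>           public double[] value(double[] point) {
>               return valueFn.value(point);
>           }
>           public DerivativeStructure[] value(DerivativeStructure[] point) {
>               double[] x = new double[point.length];
>               for (int i = 0; i < x.length; i++) {
>                   x[i] = point[i].getValue();  // strip to plain coordinates
>               }
>               double[]   v = valueFn.value(x);
>               double[][] j = jacobianFn.value(x);
>               DerivativeStructure[] out = new DerivativeStructure[v.length];
>               for (int i = 0; i < v.length; i++) {
>                   // pack [value, df_i/dx_0, ..., df_i/dx_{n-1}] for order 1
>                   double[] data = new double[1 + x.length];
>                   data[0] = v[i];
>                   System.arraycopy(j[i], 0, data, 1, x.length);
>                   out[i] = new DerivativeStructure(x.length, 1, data);
>               }
>               return out;
>           }
>       };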
>
> Note that the new API is simply another way to represent the same
> information. The former way was limited to first derivatives and was
> really awkward when multiple dimensions were involved: as you
> differentiate with respect to several variables, a real function
> becomes a vector function (a gradient), a vector function becomes a
> matrix function (a Jacobian), and it quickly becomes intractable. If
> you start thinking about second derivatives, it is worse. It was also
> difficult when you combine functions. For example, if you compute
> f(u), it looks like a univariate function, but if u = g(x, y, z), it
> is really a multivariate function. When computing differentials, you
> have problems pushing all the partial differentials du/dx, du/dy,
> du/dz through to the df/du function; there is a bottleneck. The only
> solution people had was to do the composition outside, by themselves.
> The new API avoids that: it doesn't care whether u is a simple
> canonical variable or is itself a function produced by some earlier
> computation.
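>
> A minimal sketch of that composition with the new DerivativeStructure
> class (the numeric values here are arbitrary, just for illustration):
>
>   import org.apache.commons.math3.analysis.differentiation.DerivativeStructure;
>
>   // Three canonical variables x, y, z, tracking first derivatives.
>   DerivativeStructure x = new DerivativeStructure(3, 1, 0, 2.0);
>   DerivativeStructure y = new DerivativeStructure(3, 1, 1, 3.0);
>   DerivativeStructure z = new DerivativeStructure(3, 1, 2, 5.0);
>
>   // u = g(x, y, z); du/dx, du/dy, du/dz are tracked automatically.
>   DerivativeStructure u = x.multiply(y).add(z);
>
>   // f(u) = sin(u); the partials are pushed through the composition,
>   // with no manual chain rule needed.
>   DerivativeStructure f = u.sin();
>
>   double value = f.getValue();
>   double dfdx  = f.getPartialDerivative(1, 0, 0);
>   double dfdy  = f.getPartialDerivative(0, 1, 0);
>   double dfdz  = f.getPartialDerivative(0, 0, 1);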
>
Why would someone compute second derivatives for optimization using finite differences?
That is not commonly done, and it is not numerically stable. If users don't want to deal with
the implementation themselves, they can use your differentiation framework as a wrapper. Why
is this forced onto the user by the optimization framework?

