commons-dev mailing list archives

From Bernhard Grünewaldt <>
Subject Re: [nabla] Contribution
Date Tue, 31 Mar 2009 10:29:17 GMT
> Bernhard Grünewaldt wrote:
>> Hi Luc,
>> I took a look at nabla and I am impressed.
>> There are a lot of these Transformer classes that provide the bytecode
>> for the internal work.
>> I don't really understand the full internals, but I think I have an
>> overview.
> If you feel the documentation is awkward or hard to understand, please
> tell me so. The ideas behind this project are not widespread. You can
> probably also have a look at

Ok, I think that will be the first thing I will do.
I think the documentation is well done, but it could use a better "big
picture", some additional "overview" pages, and more detailed
documentation on the internal workings.

>> I want to write a Parser for the Mathematical Standard Notation (msn).
>> The first step would be to parse the msn to Java Code.
> Nabla's goal for now is focused on transforming bytecode only. We do not
> start from source. The reason for this is that we want to be able to
> differentiate any kind of expression, even one corresponding to a complete
> program with thousands of lines and loops, conditionals, calls ...
> The use case I have in mind is the following one. I write a program that
> computes the trajectory of a satellite taking into account many different
> effects (central term of the gravitational attraction of course, but
> also perturbations like other gravitational terms from the spherical
> harmonics development, drag, solar radiation pressure, luni-solar
> attraction, maneuvers ...). This will be a complex program with lots of
> subroutines I wrote, but also algorithms from libraries already written
> like for example an ODE solver in commons-math. Then, I want to use this
> program as the heart of an optimizer that will try to find the best set
> of maneuvers to reach some target orbit. I need the derivatives, but the
> program is: 1) already written, 2) split into several separate source
> files, and 3) too complex to be differentiated by hand. The program was
> difficult to develop and validate, so I do not want to rewrite it in
> another language (msn or other), I want to keep it as is and not modify
> it at all, to prevent introducing an error. The compiler already has a
> powerful parser and has already analyzed the program and translated it
> into a "simple" representation: bytecode. So this is what I consider to be
> my source.

Ok, sounds good. Now I fully understand the aim of the project.
So I will give up the idea of my parser and rather do something more
supportive for the project :)

>> Then use an on-the-fly bytecode generator like asm or bcel to create a
>> class and instantiate it.
> We use asm in nabla. It seemed more powerful than bcel.

That's nice to hear. Asm is the one I prefer too.

>> Unless you give me some specific task I will try to write such a parser.
> There are two things I have in mind.
> The first one would be to complete the existing implementation so it can
> appropriately handle loops (it almost works), use of intermediate
> variables, and calls to other functions. This is clearly missing now,
> and we can support only very basic constructs, the ones that appear in
> the unit tests.
> The second thing would be to add a completely new implementation using
> reverse mode. This is an impressive mode that allows gradients to be
> computed very cheaply, even if the function to be differentiated has
> thousands of input parameters. I am currently reading a book referenced
> on the autodiff site
> and really want to support this mode in Nabla. The existing
> AutomaticDifferentiator would probably be renamed
> ForwardModeAutomaticDifferentiator and a new
> ReverseModeAutomaticDifferentiator also implementing the
> UnivariateDifferentiator interface would be needed.

Ok, I will take a look at these classes and at the documentation, and
then tell you my ideas.
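To make sure I understand the forward mode you mention, here is a minimal
dual-number sketch in plain Java. It only illustrates the idea behind a
ForwardModeAutomaticDifferentiator, not how Nabla actually does it (Nabla
transforms bytecode; the class and method names below are made up for the
example):

```java
// Minimal forward-mode automatic differentiation with dual numbers.
// Each value carries (f(x), f'(x)) and every operation applies the
// corresponding differentiation rule to the derivative slot.
public class DualNumber {
    final double value;      // f(x)
    final double derivative; // f'(x)

    DualNumber(double value, double derivative) {
        this.value = value;
        this.derivative = derivative;
    }

    // seed an independent variable x: dx/dx = 1
    static DualNumber variable(double x) {
        return new DualNumber(x, 1.0);
    }

    // product rule: (u*v)' = u'*v + u*v'
    DualNumber multiply(DualNumber o) {
        return new DualNumber(value * o.value,
                              derivative * o.value + value * o.derivative);
    }

    // chain rule: (sin u)' = cos(u) * u'
    DualNumber sin() {
        return new DualNumber(Math.sin(value), Math.cos(value) * derivative);
    }

    public static void main(String[] args) {
        // f(x) = x * sin(x), so f'(x) = sin(x) + x * cos(x)
        DualNumber x = DualNumber.variable(2.0);
        DualNumber f = x.multiply(x.sin());
        System.out.println(f.value + " " + f.derivative);
    }
}
```

The value and the derivative come out of a single evaluation, which is the
property that makes transforming an already-written program attractive.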

> What we need now is thinking rather than coding. We should also design
> an interface for using a differentiator to produce gradients
> efficiently, and also look at the fact that one does not always want a
> complete Jacobian J but rather a gradient in a specific direction,
> J.dX, which according to the book is much cheaper to compute directly
> than computing J and then multiplying by dX.
> Does this make sense to you?

It seems to make sense, even though I don't yet understand what you are
talking about ;)
But I will dive deep into my math books and then give you my answer.
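One thing I did work out from the books: the J.dX point is the directional
seed trick. If each input's derivative slot is seeded with the matching
component of dX, a single forward evaluation propagates J.dX directly,
without ever forming the full Jacobian. A toy sketch in plain Java (the
function f and all names here are invented for the illustration):

```java
// Directional derivative via forward mode: seed the derivative slots
// with the direction dX and one pass of f yields J.dX.
public class DirectionalDual {
    final double v; // value
    final double d; // directional derivative along the chosen dX

    DirectionalDual(double v, double d) { this.v = v; this.d = d; }

    // sum rule: (u + w)' = u' + w'
    DirectionalDual add(DirectionalDual o) {
        return new DirectionalDual(v + o.v, d + o.d);
    }

    // product rule: (u * w)' = u'*w + u*w'
    DirectionalDual multiply(DirectionalDual o) {
        return new DirectionalDual(v * o.v, d * o.v + v * o.d);
    }

    // example function f : R^2 -> R^2, f(x, y) = (x*y, x + y*y)
    static DirectionalDual[] f(DirectionalDual x, DirectionalDual y) {
        return new DirectionalDual[] { x.multiply(y), x.add(y.multiply(y)) };
    }

    public static void main(String[] args) {
        // evaluate J.dX at (x, y) = (3, 4) along dX = (1, 2) in ONE pass:
        // each input's derivative slot is seeded with its component of dX
        DirectionalDual x = new DirectionalDual(3.0, 1.0);
        DirectionalDual y = new DirectionalDual(4.0, 2.0);
        DirectionalDual[] r = f(x, y);
        // J = [[y, x], [1, 2y]] = [[4, 3], [1, 8]], so J.dX = (10, 17)
        System.out.println(r[0].d + " " + r[1].d);
    }
}
```

For n inputs, the full Jacobian would need n such passes (one per basis
direction), which is why computing J.dX directly is so much cheaper.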

So the first thing I will do is to extend the documentation.

> Luc
>> cu
>> - Bernhard
