commons-user mailing list archives

From annaykay <>
Subject [math] Optimization: Nelder-Mead and Levenberg-Marquardt
Date Wed, 23 Nov 2011 10:54:51 GMT
Hello everyone,

I am working on optimizing some model parameters in my simulation, which
models the impact of communication on attitudes towards a specific topic.
I want to optimize 17 parameters of a non-linear function so as to obtain a
minimal error value. To that end, I implemented two different optimization
algorithms in my simulation.
First I tried the Nelder-Mead algorithm. Here my error value first
increases, then decreases again, and finally stagnates at an unsatisfying
value. Is it even possible for the Nelder-Mead method that the error
increases, even though I want to minimize it?
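Not an answer from the list, but the behaviour above can be reproduced with a minimal, self-contained Nelder-Mead sketch (not Commons Math code; a textbook 2-D implementation on a hypothetical quadratic objective). It shows that the best vertex of the simplex never gets worse between iterations, while the individual trial evaluations, which is what you see if you log the error on every call to the objective, do go up and down:

```java
import java.util.ArrayList;
import java.util.List;

public class NelderMeadSketch {

    static final List<Double> evalLog = new ArrayList<>();

    // Toy objective with minimum 0 at (1, 2); every evaluation is logged,
    // the way one might log the error value inside a simulation.
    static double f(double[] p) {
        double dx = p[0] - 1, dy = p[1] - 2;
        double v = dx * dx + dy * dy;
        evalLog.add(v);
        return v;
    }

    // Minimal textbook Nelder-Mead in 2-D (reflection, expansion,
    // contraction, shrink) with standard coefficients.
    static double[] run() {
        double[][] x = { {5, 5}, {6, 5}, {5, 6} };        // initial simplex
        double[] fx = { f(x[0]), f(x[1]), f(x[2]) };
        List<Double> bestPerIter = new ArrayList<>();

        for (int iter = 0; iter < 60; iter++) {
            // Sort vertices so x[0] is best and x[2] is worst.
            for (int i = 0; i < 3; i++)
                for (int j = i + 1; j < 3; j++)
                    if (fx[j] < fx[i]) {
                        double[] tx = x[i]; x[i] = x[j]; x[j] = tx;
                        double tf = fx[i]; fx[i] = fx[j]; fx[j] = tf;
                    }
            bestPerIter.add(fx[0]);

            double[] c = { (x[0][0] + x[1][0]) / 2, (x[0][1] + x[1][1]) / 2 };
            double[] xr = { 2 * c[0] - x[2][0], 2 * c[1] - x[2][1] };   // reflect
            double fr = f(xr);
            if (fr < fx[0]) {
                double[] xe = { 3 * c[0] - 2 * x[2][0], 3 * c[1] - 2 * x[2][1] }; // expand
                double fe = f(xe);
                if (fe < fr) { x[2] = xe; fx[2] = fe; } else { x[2] = xr; fx[2] = fr; }
            } else if (fr < fx[1]) {
                x[2] = xr; fx[2] = fr;
            } else {
                double[] xc = { (c[0] + x[2][0]) / 2, (c[1] + x[2][1]) / 2 };     // contract
                double fc = f(xc);
                if (fc < fx[2]) { x[2] = xc; fx[2] = fc; }
                else for (int i = 1; i < 3; i++) {                                // shrink
                    x[i][0] = (x[i][0] + x[0][0]) / 2;
                    x[i][1] = (x[i][1] + x[0][1]) / 2;
                    fx[i] = f(x[i]);
                }
            }
        }

        boolean bestMonotone = true;          // best vertex never gets worse
        for (int i = 1; i < bestPerIter.size(); i++)
            if (bestPerIter.get(i) > bestPerIter.get(i - 1) + 1e-12) bestMonotone = false;

        boolean evalIncreased = false;        // but raw evaluations do jump up
        for (int i = 1; i < evalLog.size(); i++)
            if (evalLog.get(i) > evalLog.get(i - 1)) evalIncreased = true;

        double best = Math.min(fx[0], Math.min(fx[1], fx[2]));
        return new double[] { bestMonotone ? 1 : 0, evalIncreased ? 1 : 0, best };
    }

    public static void main(String[] args) {
        double[] r = run();
        System.out.println("best vertex monotone:      " + (r[0] == 1));
        System.out.println("some evaluation increased: " + (r[1] == 1));
        System.out.println("best value after 60 iters: " + r[2]);
    }
}
```

So an increasing logged error does not by itself mean the method is broken; what matters is whether the best point found so far improves. Stagnation at a poor value, on the other hand, is a known weakness of Nelder-Mead in higher dimensions (17 parameters is a lot for a simplex method) and usually calls for restarts from the stagnation point.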
Then I also tried a different optimization algorithm, the
Levenberg-Marquardt algorithm. Here the problem is that the changes in the
parameters are too small, so the optimization already stops after one
iteration. Do you have an idea for approaching this problem, or do you know
a different optimization algorithm that could suit my problem?
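One common cause of "stops after one iteration" is badly scaled parameters: when some of the 17 parameters live on very different magnitudes, the first step barely changes the cost and a relative-decrease convergence test fires immediately. This is not Levenberg-Marquardt itself, just a minimal sketch with plain gradient descent and hypothetical numbers, but it demonstrates the scaling effect:

```java
public class ScalingDemo {
    // Toy least-squares cost; the true parameter is 3e6 in raw units,
    // i.e. 3 after dividing by the scale 1e6. (Hypothetical numbers.)
    static double cost(double p, double scale) {
        double r = p / scale - 3;
        return r * r;
    }

    // Plain gradient descent with a relative-decrease stopping test,
    // mimicking the kind of convergence check that fires too early
    // when parameters are badly scaled. Returns the iteration count.
    static int iterations(double scale) {
        double p = 0, fp = cost(p, scale);
        for (int k = 1; k <= 1000; k++) {
            double g = 2 * (p / scale - 3) / scale;   // analytic gradient
            p -= 0.1 * g;
            double fn = cost(p, scale);
            if (Math.abs(fp - fn) < 1e-9 * (1 + Math.abs(fp))) return k;
            fp = fn;
        }
        return 1000;
    }

    public static void main(String[] args) {
        // Raw units: the gradient is ~1e-6, the cost barely moves, and the
        // convergence test triggers on the very first iteration.
        System.out.println("raw scale (1e6): stops after " + iterations(1e6) + " iteration(s)");
        // After rescaling the parameter to order 1, the same optimizer
        // makes real progress over many iterations.
        System.out.println("rescaled:        stops after " + iterations(1.0) + " iteration(s)");
    }
}
```

If this matches your situation, rescaling the 17 parameters to comparable magnitudes (or loosening the convergence tolerances) before handing them to the optimizer is worth trying. It would also help the list to know which Commons Math version and optimizer classes you are using, and what convergence checker you configured.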

Thanks in advance!
