MaxSA Simulated Annealing package

Chapter contents:

Overview
Abstract
References
GetMaxSAControl, GetMaxSAControlEps, GetMaxSAControlStep
MaxSA
MaxSAControl, MaxSAControlEps
MaxSAControlStep
MaxSAConvergenceMsg

Update

The package was updated in January 2008. From that version onwards, the package should be imported rather than included: use #import <packages/maxsa/maxsa> at the start of your program. This provides better compatibility with the G@RCH package.

Overview

The MaxSA package provides an implementation in Ox of the Simulated Annealing algorithm for optimizing non-smooth functions with possibly multiple local maxima.

The implementation follows exactly that of Goffe, Ferrier and Rogers (1994), who provide programs in Fortran and Gauss.

The following abstract and description are copied straight from the original Fortran code, adapting only Goffe's e-mail address:

Abstract

Simulated annealing is a global optimization method that distinguishes between different local optima. Starting from an initial point, the algorithm takes a step and the function is evaluated. When minimizing a function, any downhill step is accepted and the process repeats from this new point. An uphill step may be accepted. Thus, it can escape from local optima. This uphill decision is made by the Metropolis criteria. As the optimization process proceeds, the length of the steps decline and the algorithm closes in on the global optimum. Since the algorithm makes very few assumptions regarding the function to be optimized, it is quite robust with respect to non-quadratic surfaces. The degree of robustness can be adjusted by the user. In fact, simulated annealing can be used as a local optimizer for difficult functions.

This implementation of simulated annealing was used in "Global Optimization of Statistical Functions with Simulated Annealing," Goffe, Ferrier and Rogers, Journal of Econometrics, vol. 60, no. 1/2, Jan./Feb. 1994, pp. 65-100. Briefly, we found it competitive, if not superior, to multiple restarts of conventional optimization routines for difficult optimization problems.

For more information on this routine, contact its author: Bill Goffe, goffe@oswego.edu

References

Goffe, William L., Gary D. Ferrier, and John Rogers (1994). Global Optimization of Statistical Functions with Simulated Annealing. Journal of Econometrics, 60(1/2):65-99.

Fortran code available at http://emlab.berkeley.edu/Software/abstracts/goffe895.html

Function reference

GetMaxSAControl, GetMaxSAControlEps, GetMaxSAControlStep

#import <packages/maxsa/maxsa>
GetMaxSAControl();
GetMaxSAControlEps();
GetMaxSAControlStep();
Return value
Returns an array with two, two, or five values, respectively:
GetMaxSAControl returns { mxEval, iPrint }.
GetMaxSAControlEps returns { dEps, iNEps }.
GetMaxSAControlStep returns { iNS, iNT, dRT, vM, vC }.
See also
MaxSAControl, MaxSAControlEps, MaxSAControlStep
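
As a point of reference, a minimal sketch of inspecting the current settings; the variable names and printed labels are illustrative only:

#include <oxstd.h>
#import <packages/maxsa/maxsa>

main()
{
    // each Get function returns an array; index it to obtain the individual settings
    decl actrl = GetMaxSAControl();        // { mxEval, iPrint }
    decl aeps  = GetMaxSAControlEps();     // { dEps, iNEps }
    decl astep = GetMaxSAControlStep();    // { iNS, iNT, dRT, vM, vC }

    println("mxEval = ", actrl[0], ", iPrint = ", actrl[1]);
    println("dEps = ", aeps[0], ", iNEps = ", aeps[1]);
    println("dRT = ", astep[2]);
}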

MaxSA

#import <packages/maxsa/maxsa>
MaxSA(const func, const avP, const adFunc, const adT);
MaxSA(const func, const avP, const adFunc, const adT, const vLo, const vHi);
func
in: a function computing the function value
avP
in: address of p x 1 matrix with starting values
out: p x 1 matrix with final coefficients
adFunc
in: address
out: double, final function value
adT
in: address of scalar with initial temperature
out: double, final temperature
vLo
in: optional, p x 1 vector of lower bounds of parameters
vHi
in: optional, p x 1 vector of upper bounds of parameters

The supplied func argument should have the following format:
func(const vP, const adFunc, const avScore, const amHessian);
vP
in: p x 1 matrix with coefficients
adFunc
in: address
out: double, function value at vP
avScore
in: always 0 for MaxSA, as it does not need the score
amHessian
in: always 0 for MaxSA, as it does not need the Hessian
returns
1: successful, 0: function evaluation failed

Return value
Returns the status of the iterative process:
MAXSA_CONV
Strong convergence: the convergence test was passed, using tolerance eps for the last NEps function evaluations.
MAXSA_MAXEV
No convergence (maximum number of function evaluations reached)
MAXSA_FUNC_FAIL
No convergence (function evaluation failed)
MAXSA_TEMP
No convergence (initial temperature negative)
MAXSA_NOCONV
No convergence (probably maximization has not yet been attempted)

An initial temperature of

T ≤ 0
leads to immediate convergence;
0 < T ≤ `small'
gives the algorithm little freedom at the start to find a maximum further away from the initial location, which might not be ideal;
T > `small'
may lead to sensible convergence. A value of e.g. T = 5 leaves the algorithm some room to search for the optimum before cooling down towards a maximum.

The temperature reduction factor dRT is set using MaxSAControlStep and governs the speed with which the temperature is reduced. This greatly influences the convergence behaviour of the algorithm. Do try out different values for it.

For further explanation, see the text in the example simann.ox and a general article on simulated annealing.

The chosen default values for the tolerances are:

eps=1e-6, NEps=4.

See also
MaxSAControl, MaxSAControlEps, MaxSAControlStep, MaxSAConvergenceMsg

Examples
See packages/maxsa/samples/simann.ox.
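
In addition, a minimal self-contained sketch of a call to MaxSA; the test function fMyFunc and the chosen starting values, temperature and control settings are illustrative only and not part of the package:

#include <oxstd.h>
#import <packages/maxsa/maxsa>

// smooth test function with a single maximum at (1, -0.5)
fMyFunc(const vP, const adFunc, const avScore, const amHessian)
{
    adFunc[0] = -sqr(vP[0] - 1) - sqr(vP[1] + 0.5);
    return 1;                                  // 1: evaluation succeeded
}

main()
{
    decl vp = <0; 0>;                          // starting values
    decl dfunc, dt = 5;                        // dt: initial temperature

    MaxSAControl(50000, 1);                    // at most 50000 evaluations, some printing
    MaxSAControlStep(20, M_NAN, 0.85, ones(2, 1), 2);

    decl ir = MaxSA(fMyFunc, &vp, &dfunc, &dt);

    println("MaxSA: ", MaxSAConvergenceMsg(ir));
    println("function value ", dfunc, " at parameters ", vp');
}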

MaxSAControl, MaxSAControlEps

#import <packages/maxsa/maxsa>
MaxSAControl(const mxEval, const iPrint);
MaxSAControlEps(const dEps, const iNEps);
mxEval
in: int, maximum number of function evaluations; default is 100000, use -1 to leave the current value unchanged
iPrint
in: int, print no (0), few (1) or many (2) intermediate results; default is 0, use -1 to leave the current value unchanged
dEps
in: double, eps, default is 1e-6, use ≤ 0 to leave the current value unchanged. Specifies the tolerance between the last iNEps results and the present result.
iNEps
in: integer, default is 4, use ≤ 0 to leave the current value unchanged. Specifies the number of past results to compare with the present function value.
Return value
No return value.
See also
GetMaxSAControl, GetMaxSAControlEps, MaxSA
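
A minimal sketch of adjusting these settings; the chosen values are illustrative only:

#include <oxstd.h>
#import <packages/maxsa/maxsa>

main()
{
    // allow up to 200000 function evaluations and print a few intermediate results
    MaxSAControl(200000, 1);
    // tighten the tolerance, but keep the number of compared results (iNEps) unchanged
    MaxSAControlEps(1e-8, -1);

    decl actrl = GetMaxSAControl(), aeps = GetMaxSAControlEps();
    println("mxEval = ", actrl[0], ", iPrint = ", actrl[1]);
    println("dEps = ", aeps[0], ", iNEps = ", aeps[1]);
}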

MaxSAControlStep

#import <packages/maxsa/maxsa>
MaxSAControlStep(const iNS, const iNT, const dRT, const vM, const vC);
iNS
in: integer, number of cycles. After iNS x p function evaluations, each element of vM is adjusted so that approximately half of all function evaluations are accepted. The suggested value is 20.
iNT
in: integer, number of iterations before temperature reduction. After iNT x iNS x p function evaluations, the temperature (T) is changed by the factor dRT. The value suggested by Corana et al. is MAX(100, 5*p). See Goffe et al. for further advice. A value of M_NAN leads to MAX(100, 5*p) being used.
dRT
in: scalar, the temperature reduction factor. The value suggested by Corana et al. is 0.85. See Goffe et al. for more advice.
vM
in: p x 1 matrix with the step lengths used in the initial step. The algorithm adjusts vM automatically, so an imprecise initial value is adapted as the iterations proceed.
vC
in: p x 1 matrix or scalar with the step length adjustment. The suggested value for all elements is 2.0.
Return value
No return value.

The chosen default values can be used, but might be suboptimal. Experimenting with the values can lead to better convergence behaviour, depending on the problem at hand. When the routine MaxSAControlStep is not called, a warning is printed on the screen.

The temperature reduction factor dRT is especially important, as it governs the speed with which the system is `cooled down'. This greatly influences the convergence behaviour of the algorithm. The default value is 0.85, but do try out different values for it.

See also
GetMaxSAControlStep, MaxSA
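
A minimal sketch of a slower cooling schedule for a hypothetical three-parameter problem; the chosen values are illustrative only:

#include <oxstd.h>
#import <packages/maxsa/maxsa>

main()
{
    decl cp = 3;                               // number of parameters (illustrative)

    // 20 cycles, MAX(100, 5*p) iterations via M_NAN, slower cooling than the
    // default (dRT = 0.95), unit initial step lengths, adjustment factor 2
    MaxSAControlStep(20, M_NAN, 0.95, ones(cp, 1), 2);
}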

MaxSAConvergenceMsg

#import <packages/maxsa/maxsa>
MaxSAConvergenceMsg(const iCode);
iCode
in: int, code returned by MaxSA
Return value
Returns the text corresponding to the convergence code listed under the return values of MaxSA.
See also
MaxSA
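
A minimal sketch that prints the message for each convergence code, assuming the MAXSA_* codes listed under the return values of MaxSA are available as constants from the package header:

#include <oxstd.h>
#import <packages/maxsa/maxsa>

main()
{
    // loop over the documented convergence codes and print their messages
    decl acodes = {MAXSA_CONV, MAXSA_MAXEV, MAXSA_FUNC_FAIL, MAXSA_TEMP, MAXSA_NOCONV};
    decl i;
    for (i = 0; i < sizeof(acodes); ++i)
        println(acodes[i], ": ", MaxSAConvergenceMsg(acodes[i]));
}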


MaxSA version 1.0c. Original Maximization © J.A. Doornik.
Changes made on 5 December 2006 by C.S. Bos to implement the MaxSA statements.