4 Portable Fortran Version (GMF)

1 Introduction

The early Global Minimizer was written in Fortran (GMF). Therefore, we describe the Fortran version as an introduction to the later versions. The GMF system minimizes a continuous function

\begin{eqnarray}
f(x),\ x=(x_1, \ldots,
x_n)
\end{eqnarray}


where $x\in A\subset R^n$ [Mockus et al., 1997]. It is assumed for most methods that the set $A$ is a rectangular parallelepiped

\begin{eqnarray}
A=\big\{ x: a_i \leq
x_i \leq b_i,\quad i=1, \ldots, n\big\}.
\end{eqnarray}


Other constraints are approximately reduced to (4.2) by the penalty function techniques.
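As an illustration of the penalty reduction (a minimal Python sketch, not part of GMF; the quadratic penalty form and the weight are our assumptions):

```python
# Illustrative sketch: reduce a constraint g(x) <= 0 to the rectangular-
# constraint form (4.2) by adding a quadratic penalty to the objective.

def penalized(f, g, weight):
    """Return an unconstrained objective f(x) + weight * max(0, g(x))**2."""
    def fp(x):
        return f(x) + weight * max(0.0, g(x)) ** 2
    return fp

# Example: minimize f(x) = x**2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
f = lambda x: x * x
g = lambda x: 1.0 - x
fp = penalized(f, g, weight=1000.0)
# The minimum of the penalized objective approaches x = 1 as the weight grows.
```

The reduction is approximate: a finite weight allows small constraint violations, so in practice the weight is increased until the violation is tolerable.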

A feature of all versions of the Global Minimum software, including GMF, is the Bayesian approach. The Bayesian methods are optimal in the sense of the average deviation. Here the next observation is defined by minimizing the risk function $R(x)$, where the risk is the expected deviation from the global minimum. Minimizing the risk function is justified if the objective function $f(x)$ needs considerable computing. Otherwise, it may be better to do more observations, which need not be planned optimally. In such a case, a good idea is to use a uniform search, for example, of LP type [Sobolj, 1967]. Using the LP-search we lose some efficiency of observations but still retain convergence. However, the LP-search can be too expensive if observations are very cheap. Then we apply the uniform random (Monte Carlo) search, which is very simple. Using the Monte Carlo search we obtain convergence only in probability. The LP-search is implemented by the algorithm LPMIN and the uniform random search by the algorithms MIG1 and MIG2. Both are included in GMF.
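The uniform random search can be sketched in a few lines of Python (an illustration only, not the MIG1/MIG2 code):

```python
# Illustrative sketch: uniform random (Monte Carlo) search over a
# rectangular region A = {x: a[i] <= x[i] <= b[i]}.  The best observed
# point converges to the global minimum only in probability.
import random

def monte_carlo_search(f, a, b, trials, seed=0):
    """Return (best_x, best_f) after 'trials' uniform random observations."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(trials):
        x = [rng.uniform(lo, hi) for lo, hi in zip(a, b)]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Each observation is planned independently of all previous ones, which is why the method is cheap per step but less efficient per observation than a Bayesian or LP-type search.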

The method of clustering [Torn, 1990] is included as a good heuristic technique under the title GLOPT. It usually works well if the number of local minima is small and known in advance, and if the "attraction area" of the global minimum is large. GLOPT is comparable to LPMIN in computing time.

The global line search EXKOR is a "semi-global" method designed for approximately "additive" functions. EXKOR is convenient for visualization: one sees clearly how the values of the objective function depend on the variables. That is important for preliminary investigations.
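The idea of a coordinate-wise global line search can be sketched as follows (a Python illustration, not the EXKOR code; the grid size and number of sweeps are arbitrary assumptions):

```python
# Illustrative sketch: coordinate-wise global line search, suited to
# approximately additive objectives.  Each variable is minimized in turn
# on a one-dimensional grid while the other variables stay fixed.

def line_search(f, a, b, points=101, sweeps=3):
    """Return (x, f(x)) after 'sweeps' passes of one-dimensional searches."""
    n = len(a)
    x = [(lo + hi) / 2.0 for lo, hi in zip(a, b)]  # start at the center
    for _ in range(sweeps):
        for i in range(n):
            grid = [a[i] + (b[i] - a[i]) * k / (points - 1)
                    for k in range(points)]
            x[i] = min(grid, key=lambda t: f(x[:i] + [t] + x[i + 1:]))
    return x, f(x)
```

For an exactly additive function a single sweep already finds the (grid) minimum of each coordinate; for approximately additive functions a few sweeps are needed.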

Global methods usually search the whole area; otherwise, one can miss the global minimum. Obviously, the global search is not the most efficient way for local optimization. Therefore, in GM the global search is usually completed by a local one. By default, the best result of the global search is taken as the initial point for the local search.

Methods of the variable metrics type are widely used for local optimization of continuously differentiable functions without noise under non-linear constraints. The local method, called Recursive Quadratic Programming [Biggs, 1975], is included under the name REQP[*].

A specific Variable Metrics version for rectangular constraints [Tiesis, 1975] is called MIVAR4. For local optimization of non-differentiable functions with non-linear constraints the simplex type method [Himmelblau, 1972] is used under the name FLEXI.

Sometimes, one expects that the influence of some variables and their groups is considerably greater than that of others. Then the method LPMIN should be used to analyze the structure before the regular optimization starts. LPMIN orders the variables by their "importance" using a sort of variance analysis technique [Saltenis, 1989].

LBAYES is a stochastic approximation type algorithm with
"Bayesian" step length control. It is designed for local optimization of uni-modal functions with noise.

In GMF, no machine-dependent routines are used. Therefore, the software can be adapted to any computer with a standard FORTRAN compiler by adjusting standard machine-dependent constants.

2 General Discussion

1 Parameters

The parameters X, A, B, N, FM, IPAR, PAR, IPA, IPAA are used in all subroutines. Here
X is an array of length N which defines the coordinates of a point being considered (initial, optimal or current),
A is an array of length N which defines the lower bounds of X,
B is an array of length N which defines the upper bounds of X,
N is the number of variables (dimension of X) usually N$\leq 20$,
FM is the value of the function FI(X, N) at the point X,
IPAR is an array of length 30 which defines the integer control parameters,
PAR is an array of length 30 which defines the real control parameters,
IPA is a shift of integer control parameters,
IPAA is a shift of real control parameters. If only one method is used, then both shifts are zero: $\hbox{IPA}=0$ and $\hbox{IPAA}=0$. If several methods are used sequentially, then a shift for the next method must be equal to the sum of numbers of control parameters used before by other methods. The number of control parameters of different methods is given in Table 4.1. For all the methods the first integer control parameter is the printing parameter: IPAR(IPA+1)=IPR
(IPA=0 if only one method is used). If $\hbox{IPR}<0$,
then only diagnostic messages are printed.
If $\hbox{IPR}=0$,
then the initial data, the final results and diagnostic messages are printed.
If $\hbox{IPR}>0$,
then not only those but also the results of each IPR-th iteration are printed. The meaning of other control parameters will be explained when describing the corresponding subroutines.
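The shift bookkeeping can be illustrated by a short Python sketch (not GMF code; the parameter counts for BAYES1 and MIVAR4 are taken from Table 4.1):

```python
# Illustrative sketch: when methods run in sequence, each method's shifts
# IPA (integer) and IPAA (real) equal the total numbers of control
# parameters consumed by the methods before it.

counts = [("BAYES1", 3, 0),   # (name, integer parameters, real parameters)
          ("MIVAR4", 4, 4)]

ipa, ipaa = 0, 0
shifts = {}
for name, n_int, n_real in counts:
    shifts[name] = (ipa, ipaa)  # shifts into IPAR and PAR for this method
    ipa += n_int
    ipaa += n_real
# BAYES1 uses IPAR(1..3); MIVAR4 then starts at IPAR(4) and PAR(1).
```

With a single method both shifts remain zero, in agreement with the rule IPA = IPAA = 0 above.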


2 List of Methods

Table 4.1 shows the parameters needed to combine the methods into different sequences.

Table 4.1: Table of Parameters.
(The last two columns give the numbers of integer (IPAR) and real (PAR) control parameters.)

No  Name        Method                                           Integer IPAR  Real PAR

Global optimization with rectangular constraints

1   BAYES1      Bayesian                                              3            0
2   UNT         Extrapolation                                         4            0
3   LPMIN       Uniform deterministic                                 N+3          0
4   GLOPT       Clustering                                            3            0
5   MIG1, MIG2  Uniform random                                        2            0
6   EXTR        Bayesian one-dimensional                              3            2
7   EXKOR       Extension of EXTR to the multi-dimensional case       5            N+1

Local optimization

8   MIVAR4      Variable metrics with rectangular constraints         4            4
9   REQP        Variable metrics with non-linear constraints          4            4
10  FLEXI       Simplex with non-linear constraints                   4            2

Local optimization with noise

11  LBAYES      Stochastic approximation with Bayesian step
                length and rectangular constraints                    3            2


Table 4.1 shows that setting the algorithm parameters requires many adjustments. This task is easy using the "ready-made" examples, which include sequences of different algorithms. One selects the needed algorithms simply by "commenting out" the redundant ones. For example, the file 'gmfl/ex8.f' on the enclosed disk includes seven algorithms named BAYES1, MIVAR4, LBAYES, FLEXI, REQP, UNT, GLOPT. The file 'gmfl/ex9.f' includes only one algorithm, EXKOR. The software is on web-sites (see section 4). The methods are described in [Mockus et al., 1997] and [Mockus, 1989a].

3 Program Description


1 Common Blocks

Values of the function FI(X, N) are stored in the array Y(100) of the common block /BS1/. The array of variables X is defined by the array XN of length MN=N*M.
In the common block /STATIS/ IFAIL, IT, IM, M:
IFAIL is a control indicator:
- if the initial data is not correct, then $\hbox{IFAIL}=10$ and control returns to the main program,
- if the initial data is correct, then $\hbox{IFAIL}\neq 10$ and IFAIL shows the number defining the stopping rule,
IT is the number of iterations,
IM is the number of the optional iterations,
M is the number of function evaluations (observations).


2 Objective Functions and Constraints

The function to be minimized should be represented as a real function FI(X,N). In most methods, only the lower and upper bounds are fixed by arrays A and B.
In methods with non-linear constraints, the subroutine
CONSTR (X, N, G, MC) is used. Here
G is a one-dimensional array of length MC that defines the constraints at the point X,
MC is the number of constraints.

It is well known that local methods of optimization are usually sensitive to the scales of variables. The parameters of the local methods in GMF are tuned to the case when $A(I)=-1$ and $B(I)=1,\ I=1,N$. Therefore, it is convenient to reduce the rectangular constraints $A\leq X\leq B$ to the $N$-dimensional hypercube $[-1, 1]^N$. This can be done by the following reduction formula[*]).

\begin{displaymath}X(I)={2\over B(I)-A(I)} X0(I)-{B(I)+A(I)\over B(I)-A(I)},\
I=1,N\ .\end{displaymath}

Here $X0$ are original variables, and $X$ are variables scaled to fit into $[-1, 1]^N$.
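As a worked example of the reduction formula (a Python sketch, not GMF code), using the bounds of the test problem below:

```python
# Illustrative sketch: map original variables X0 in [A, B] to scaled
# variables X in [-1, 1] by X = 2/(B-A) * X0 - (B+A)/(B-A).

def scale(x0, a, b):
    return [2.0 / (bi - ai) * xi - (bi + ai) / (bi - ai)
            for xi, ai, bi in zip(x0, a, b)]

a, b = [-0.25, -0.125], [0.5, 0.625]
# Lower bounds map to -1, upper bounds to +1, and midpoints to 0.
```

This confirms the formula: substituting $X0 = A$ gives $X = -1$, and $X0 = B$ gives $X = 1$.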

1 Example

In most of the following examples, this test function $f(x)$[*] is used

\begin{eqnarray}
f(x)=\frac{2}{N}\sum^N_{i=1} \big (x^2_i-\cos (18 x_i)\big).
\end{eqnarray}


Here $N=2,\ x_1\in [-0.25;\ 0.5],\ x_2\in [-0.125;\ 0.625]$. The corresponding Fortran function is FURASN.
Figure 4.1: Objective Function FURASN
\begin{figure}\begin{codebox}{4.8in}
\begin{verbatim}
      FUNCTION FURASN(X,N)
C     TEST FUNCTION (4.3): F(X) = (2/N)*SUM(X(I)**2 - COS(18*X(I)))
      DIMENSION X(N)
      F=0.
      DO 1 I=1,N
      XI=X(I)
    1 F=F+XI*XI-COS(18.*XI)
      AN=N
      FURASN=F*(2./AN)
      RETURN
      END\end{verbatim}\end{codebox}\end{figure}
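For reference, the test function (4.3) can also be written in Python (an illustration; the GMF source uses the Fortran function FURASN):

```python
# Illustrative sketch of the test function (4.3):
# f(x) = (2/N) * sum(x_i**2 - cos(18*x_i)).
import math

def furasn(x):
    n = len(x)
    return (2.0 / n) * sum(xi * xi - math.cos(18.0 * xi) for xi in x)

# On the region above the global minimum is at x = (0, 0), where f = -2.
```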

3 Main Program

The main program defines the input data and the sequence of methods. At the beginning, one selects (using Table 4.1 or the ready-made example 'gmfl/ex8.f') a desirable sequence of methods of optimization and analysis. Then a function FI(X, N) is written that evaluates the objective function at a fixed point X.

If needed, the subroutine CONSTR(X, N, G, MC) is included. It evaluates the constraints at the point X. The length of the arrays usually depends on the subroutines. The exception is the arrays IPAR and PAR, whose length is always thirty. The parameters of the methods are placed in the arrays IPAR and PAR following the given sequence of methods. The formal parameters IPA and IPAA are fixed using the rules given in the previous section. When only one method is used, $\hbox{IPA}=\hbox{IPAA}=0$.


4 Example of the Main Program

The global minimum of test function (4.3) is determined using the global Bayesian method BAYES1. Then the results of the global search are corrected by the local method of variable metrics MIVAR4.
The test function is represented as the function FURASN (X, N).
The arrays are: X, A, B, XN, HES, IPAR, PAR

It follows from Table 4.1 that, in the subroutine BAYES1, there are three integer parameters. In the subroutine MIVAR4, four integer parameters are used. In MIVAR4 four real parameters should be defined, too. This means that seven elements of IPAR and four elements of PAR should be fixed.
The main program is in Figure 4.2.

Figure 4.2: Example of MAIN
\begin{figure}\begin{codebox}{4.8in}
\begin{verbatim}
program main
dimension...
...mension x(n)
fi=furasn(x,n)
return
end\end{verbatim}\end{codebox}\end{figure}


5 Installation

1 MS DOS version of GMF

Installing: This version was tested using the Digital Research Fortran-77 Version 4.1 and the LINK-86 Linkage Editor Version 1.5f, Digital Research, Inc. The example: the file 'ex9.for' is an example of the main program using the EXKOR method to minimize the FURASN function (see expression (4.3)).
The file 'exkor.for' is the source of EXKOR.
The file 'i1mach1.for' contains portability codes and other service routines, such as an independent random number generator, various test functions, etc.

2 Linux version of GMF

Installing: This version was tested using the standard Linux f77-style shell script 'f77' to compile and link Fortran and assembly codes. The example: note that here the Fortran file extension is '.f'.
jonas mockus 2004-03-03