12 "Duel" Problem,
Differential Game Model

1 Introduction

In all three competition models, the Nash, the Walras, and the Inspector, steady-state conditions were considered, for simplicity. In real-life competition processes, dynamics are essential. Dynamic competition models may be represented as differential games. As a rule, they are difficult for numerical and visual analysis [Friedman, 1971,Galperin and Zheng, 1991,Hajek, 1975,Isaacs, 1965,Krasovski, 1970,von Neumann and Morgenstern, 1953,Petrosyan, 1985,Petrosyan, 1993,Rodin, 1987]. Therefore, one tries to make dynamic competition models as simple as possible.

Consider, as a simple illustration, two objects in space trying to destroy each other. Assume, as a first approximation, that the control parameters are: initial points $z_0,w_0$, "climbing" rates $a,b$, and firing times $t_1,t_2$. These parameters are set before the start for both objects. Denote the control parameters by vectors $x=(x_i,\ i=1,2,3)$ and $y=(y_i,\ i=1,2,3)$, respectively. Denote by $(a_i^x,a_i^y)$ the lower bounds of these parameters and by $(b_i^x,b_i^y)$ the upper bounds. In the illustrative example $x_1=z_0,x_2=a,x_3=t_1$, and $y_1=w_0,y_2=b,y_3=t_2$, where $-1 \le z_0 \le 1,\ -1 \le w_0 \le 1,\ -1 \le a \le 1,\ -1 \le b \le 1$. Movements of the objects are described in the two-dimensional "height-time" space by the first-order differential equations

\begin{eqnarray}dz(t)/dt &=&a z(t),
\\ dw(\tau)/d\tau &=& b w(\tau), \ \tau=2-t.
\end{eqnarray}

The solution of these equations defines trajectories of the objects

\begin{eqnarray}z(t) &=&z_0 e^{at},
\\ w(\tau)&=& w_0 e^{b\tau}.
\end{eqnarray}
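As a quick numerical check, the closed-form solution of the first-order equation can be compared with a forward-Euler integration. This is a minimal Python sketch; the parameter values are illustrative, not prescribed by the model:

```python
import math

def z_exact(z0, a, t):
    # Closed-form solution of dz/dt = a*z with z(0) = z0.
    return z0 * math.exp(a * t)

def z_euler(z0, a, t, steps=100000):
    # Forward-Euler integration of the same equation, for comparison.
    dt = t / steps
    z = z0
    for _ in range(steps):
        z += a * z * dt
    return z

print(z_exact(1.0, 0.5, 2.0))   # e^1 = 2.718...
print(z_euler(1.0, 0.5, 2.0))   # close to the exact value
```

The same check applies to $w(\tau)$ after the substitution $\tau=2-t$.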

As usual, models of moving objects use homogeneous differential equations of at least second order

\begin{eqnarray}d^2z(t)/dt^2+g\, dz(t)/dt &=&a z(t),
\\ d^2 w(\tau)/d\tau^2+h\,
dw(\tau)/d\tau &=& b w(\tau), \ \tau=2-t.
\end{eqnarray}

The solutions of these equations define more realistic trajectories of the objects because they include acceleration, too. These trajectories depend on the roots $r_1,r_2$ of the characteristic equation



\begin{eqnarray}r_1= -g/2 + \sqrt {(g/2)^2+a},\ r_2= -g/2 - \sqrt {(g/2)^2+a}.
\end{eqnarray}

The trajectory of the first object

\begin{eqnarray}z(t)&=&\cases {C_1 e^{r_1 t} + C_2 e^{r_2 t}, &\ if $r_1 \ne r_2$, real, \cr
(C_1 + C_2 t) e^{-(g/2) t}, &\ if $r_1 = r_2 = -g/2$, \cr
e^{-(g/2) t}(C_1 \cos \beta t + C_2 \sin \beta t), &\ if $r_1= -g/2 + \beta i,\ r_2= -g/2 - \beta i$. \cr}
\end{eqnarray}

Here $\beta=\sqrt {-a-(g/2)^2}$.

The constants $C_1,C_2$ are defined by initial or boundary conditions. Suppose that the initial height $z(0)=z_0$ is fixed by the operator of the first flying object. Then

\begin{eqnarray}z_0&=&\cases {C_1+C_2, &\ if $r_1 \ne r_2$, \cr
C_1, &\ if $r_1 = r_2$, \cr
C_1, &\ if\ $r_1= -(g/2) + \beta i,\ r_2= -(g/2) - \beta i$. \cr}
\end{eqnarray}

Considering the speed $dz(t)/dt$ one can write

\begin{eqnarray}dz(t)/dt= \nonumber\\
C_1 r_1 e^{r_1 t} + C_2 r_2 e^{r_2 t}, \ if\ r_1 \ne r_2,
\end{eqnarray}

\begin{eqnarray}dz(t)/dt= \nonumber \\
-(g/2)(C_1 + C_2 t) e^{-(g/2) t} + C_2 e^{-(g/2) t}, \ if\ r_1 = r_2,
\end{eqnarray}

\begin{eqnarray}dz(t)/dt= \nonumber \\
-(g/2)e^{-(g/2) t}(C_1 \cos \beta t + C_2 \sin \beta t) \nonumber \\
+ e^{-(g/2) t} \beta (-C_1 \sin \beta t + C_2 \cos \beta t), \nonumber \\
if\ r_1= -(g/2) + \beta i,\ r_2= -(g/2) - \beta i.
\end{eqnarray}

From here, assuming that the initial speed is $dz(0)/dt=z^1_0$,

\begin{eqnarray}z^1_0&=&\cases {C_1 r_1 + C_2 r_2, &\ if $r_1 \ne r_2$, \cr
-(g/2) C_1 + C_2, &\ if $r_1 = r_2$, \cr
\alpha C_1 + \beta C_2, &\ if\ $r_1= \alpha + \beta i,\ r_2= \alpha - \beta i$. \cr}
\end{eqnarray}
Here $\alpha = -g/2$.

In this way we obtain three pairs of linear equations, corresponding to the three different pairs of roots $r_1, r_2$, that define the two unknown constants $C_1,C_2$.
The trajectory of the second object $w(\tau)$ is defined in the same way.
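The three cases can be collected into a single routine. The Python sketch below computes $C_1, C_2$ from the initial conditions $z(0)=z_0$ and $dz(0)/dt=z^1_0$ and evaluates $z(t)$ for each type of roots of $r^2+gr-a=0$; the tolerance for the double-root case is an implementation assumption:

```python
import math

def z_second_order(z0, dz0, g, a, t):
    # Solution of d^2z/dt^2 + g dz/dt = a z with z(0)=z0, z'(0)=dz0,
    # following the three cases of the characteristic roots r^2 + g r - a = 0.
    disc = (g / 2.0) ** 2 + a
    if disc > 1e-12:                          # real distinct roots
        r1 = -g / 2.0 + math.sqrt(disc)
        r2 = -g / 2.0 - math.sqrt(disc)
        c2 = (dz0 - r1 * z0) / (r2 - r1)      # from z0 = C1 + C2, dz0 = C1 r1 + C2 r2
        c1 = z0 - c2
        return c1 * math.exp(r1 * t) + c2 * math.exp(r2 * t)
    if disc > -1e-12:                         # double root r = -g/2
        c1, c2 = z0, dz0 + (g / 2.0) * z0
        return (c1 + c2 * t) * math.exp(-(g / 2.0) * t)
    alpha = -g / 2.0                          # complex roots alpha +- beta i
    beta = math.sqrt(-disc)
    c1 = z0
    c2 = (dz0 - alpha * z0) / beta            # from dz0 = alpha C1 + beta C2
    return math.exp(alpha * t) * (c1 * math.cos(beta * t) + c2 * math.sin(beta * t))

print(z_second_order(1.0, 0.0, 0.0, -1.0, 0.0))  # initial condition z(0) = 1
```

For example, with $g=0$, $a=-1$, $z_0=1$, $z^1_0=0$ the routine reproduces $z(t)=\cos t$.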

Later we shall apply the first-order linear differential equations (12.1) and (12.2) as a first approximation to test algorithms of optimization and modeling. Denote by $d(t)$ the distance between the objects at the moment $t$

\begin{eqnarray}d(t) =\vert\vert(w(\tau),\tau)-(z(t),t)\vert\vert, \ \tau=2-t.
\end{eqnarray}

Denote by $p(t)$ the hitting probability

\begin{eqnarray}p(t) =1- (d(t)/D)^{\alpha}.
\end{eqnarray}

Here $D \ge \max_t d(t)$ and $\alpha >0$. The expected utility function of the first object is

\begin{eqnarray}U(x,y)&=&\cases {u_1 p(x_3)-u_2 (1-p(x_3)) p(y_3), &if $x_3 < y_3$, \cr
-u_2 p(y_3)+u_1 (1-p(y_3)) p(x_3), &if $x_3 > y_3$, \cr
u_1 p(x_3)-u_2 p(y_3), &if $x_3=y_3$. \cr}
\end{eqnarray}

The expected utility function of the second object

\begin{eqnarray}V(x,y)&=&\cases {v_1 p(y_3)-v_2 (1-p(y_3)) p(x_3), &if $x_3 > y_3$, \cr
-v_2 p(x_3)+v_1 (1-p(x_3)) p(y_3), &if $x_3 < y_3$, \cr
v_1 p(y_3)-v_2 p(x_3), &if $x_3=y_3$. \cr}
\end{eqnarray}

Expressions (12.18) and (12.19) of expected utilities follow from expression (12.17) of hitting probabilities. It is assumed that the utility of hitting the hostile object is plus $u_1$ or $v_1$, and the utility of being hit is minus $u_2$ or $v_2$. Here $u_1,u_2$ denote utilities of the first object and $v_1,v_2$ those of the second one.
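A minimal Python sketch of the hitting probability (12.17) and the two expected utilities follows. All numeric parameters are illustrative assumptions, and the "opponent fired first" branches are symmetric reconstructions of the truncated payoff cases, not a definitive implementation:

```python
import math

# Illustrative parameters (assumed, not prescribed by the model).
z0, a = 0.5, 0.3       # first object: initial height, climbing rate
w0, b = -0.5, 0.4      # second object
D, ALPHA = 4.0, 1.0    # normalizing distance, probability exponent

def p(t):
    # Hitting probability p(t) = 1 - (d(t)/D)^alpha in the (height, time) plane.
    tau = 2.0 - t
    d = math.hypot(w0 * math.exp(b * tau) - z0 * math.exp(a * t), tau - t)
    return 1.0 - (d / D) ** ALPHA

def U(x3, y3, u1=1.0, u2=1.0):
    # Expected utility of the first object for firing times x3, y3.
    if x3 < y3:                               # first object fires first
        return u1 * p(x3) - u2 * (1.0 - p(x3)) * p(y3)
    if x3 > y3:                               # second object fires first (assumed branch)
        return -u2 * p(y3) + u1 * (1.0 - p(y3)) * p(x3)
    return u1 * p(x3) - u2 * p(y3)            # simultaneous firing

def V(x3, y3, v1=1.0, v2=1.0):
    # Expected utility of the second object, by the symmetric rule.
    if x3 > y3:
        return v1 * p(y3) - v2 * (1.0 - p(y3)) * p(x3)
    if x3 < y3:                               # assumed branch, symmetric to U
        return -v2 * p(x3) + v1 * (1.0 - p(x3)) * p(y3)
    return v1 * p(y3) - v2 * p(x3)

print(U(0.4, 0.6), V(0.4, 0.6))
```

With equal firing times the utilities cancel to $u_1 p - u_2 p = 0$ when $u_1=u_2$.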

2 Convex Version

Expected utilities (12.18) and (12.19) are not convex functions of variables $p(x_3)$ and $p(y_3)$. The convex version may be obtained assuming that the object destroys the enemy, if it is not hit itself. Suppose that $u_1=u_2=v_1=v_2=1$. Then it follows from expressions (12.18) and (12.19) that the expected utility function of the first object

\begin{eqnarray}U(x,y)&=&\cases {p(x_3)-(1-p(x_3)), &if $x_3 < y_3$, \cr
-p(y_3)+(1-p(y_3)), &if $x_3 > y_3$, \cr
p(x_3)-p(y_3), &if $x_3=y_3$. \cr}
\end{eqnarray}

The expected utility function of the second object

\begin{eqnarray}V(x,y)&=&\cases {p(y_3)-(1-p(y_3)), &if $x_3 > y_3$, \cr
-p(x_3)+(1-p(x_3)), &if $x_3 < y_3$, \cr
p(y_3)-p(x_3), &if $x_3=y_3$. \cr}
\end{eqnarray}

Expected utilities (12.20) and (12.21) are convex functions of the probabilities $p(x_3)$ and $p(y_3)$ at the equilibrium point

\begin{eqnarray}p(x_3)=0.5, \nonumber \\ p(y_3)=0.5.
\end{eqnarray}

The convexity of expected utilities (12.20) and (12.21) as functions of the probabilities $p(x_3)$ and $p(y_3)$ does not imply their convexity as functions of the firing times $x_3$ and $y_3$ and the other parameters. Convexity is a part of the sufficient equilibrium conditions. Therefore, the equilibrium may not exist.

As usual, expected utilities reach their maximal values at the ends of the intervals of the parameters $z_0,a,w_0,b$. For the first object there are four pure strategies corresponding to various arrangements of these ends:
$(z_0=z_{min},a=a_{min}),\ (z_0=z_{min},a=a_{max}),\ (z_0=z_{max},a=a_{min}),\ (z_0=z_{max},a=a_{max})$.
The pure strategies for the second object are:
$(w_0=w_{min},b=b_{min}),\ (w_0=w_{min},b=b_{max}),\ (w_0=w_{max},b=b_{min}),\ (w_0=w_{max},b=b_{max})$.
We may search for the pure equilibrium of the bimatrix games (12.20), (12.21) corresponding to each fixed pair $(x_3,y_3)$ of firing times using the simple discrete algorithm (13.14). However, the probability of obtaining a pure equilibrium is high only for large bimatrix games [Forgo et al., 1999]. Therefore, we consider mixed strategies first.
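For a fixed pair of firing times, the pure equilibrium search reduces to scanning a small bimatrix game for mutual best replies. A Python sketch of such a discrete search; the 2x2 payoffs are made-up illustrations, not the duel matrices:

```python
def pure_equilibria(u, v):
    # Brute-force search for pure Nash equilibria of a bimatrix game:
    # (r, s) is an equilibrium if r is a best reply to s and vice versa.
    m, n = len(u), len(u[0])
    eq = []
    for r in range(m):
        for s in range(n):
            if all(u[r][s] >= u[k][s] for k in range(m)) and \
               all(v[r][s] >= v[r][k] for k in range(n)):
                eq.append((r, s))
    return eq

# A toy 2x2 example (illustrative payoffs with dominant second strategies).
u = [[2, 0], [3, 1]]
v = [[2, 3], [0, 1]]
print(pure_equilibria(u, v))  # -> [(1, 1)]
```

As the text notes, such a pure equilibrium often fails to exist for small games, which motivates the mixed strategies below.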

3 Mixed Strategies

If expected utilities reach the maximal values at the ends of the intervals of the parameters $z_0,a,w_0,b$, then mixed strategies could be helpful. A mixed strategy means that each object chooses at random a combination of the lower and upper bounds. Define those bounds as zero-ones, for simplicity. There are four events:
$(z_0=0,a=0),\ (z_0=0,a=1),\ (z_0=1,a=0),\ (z_0=1,a=1)$. Probabilities of these events are defined by the first four components $x(i) \ge 0,\ \sum_{i=1}^4 x(i)=1$ of a vector $x=(x(i),\ i=1,...,5)$.
The fifth component $x(5)=t_1$ defines the firing time. A vector $y=(y(j),\ j=1,...,5)$ has a similar meaning for the second object.

Applying mixed strategies $x=(x(i),\ i=1,...,5)$ and $y=(y(j),\ j=1,...,5)$, the utility functions of the first and the second objects are

\begin{eqnarray}U_a(x,y)=\sum_{r\in R} \sum_{s\in S} x(r) u(r,s) y(s),
\\ V_a(x,y)=\sum_{r\in R} \sum_{s\in S} x(r)
v(r,s) y(s).
\end{eqnarray}

Here $R=\{1,2,3,4\},\ S=\{1,2,3,4\}$, where $1,2,3,4$ denote the four events $(z_0=0,a=0),\ (z_0=0,a=1),\ (z_0=1,a=0),\ (z_0=1,a=1)$. The matrix element $u(r,s)$ is the winning of the first object, if the pure strategy denoted by the quadruple $(t_1,t_2,r,s)$ is used. The matrix element $v(r,s)$ is the winning of the second object, if the pure strategy $(t_1,t_2,r,s)$ is used. Expanding expression (12.23),

\begin{eqnarray}U_a(x,y)=x(1) u(1,1) y(1)+x(2) u(2,1) y(1)+\nonumber\\
x(3) u(3,1) y(1)+x(4) u(4,1) y(1)+ \ldots +\nonumber\\
x(1) u(1,4) y(4)+x(2) u(2,4) y(4)+\nonumber\\
x(3) u(3,4) y(4)+x(4) u(4,4) y(4).
\end{eqnarray}

Here winnings $u(r,s)$ are defined by expression (12.25). Expression (12.24) illustrates how the function $U_a(x,y)$ is defined. The function $V_a(x,y)$ is defined similarly.

One defines bimatrix games (12.23) and (12.24) for each pair of firing times $(t_1,\ t_2)$. Therefore, for a fixed pair $(t_1,\ t_2)$, the same methods of searching for the equilibrium points may be used as in the inspector model (13.14). Here the winnings $u(r,s)$ and $v(r,s)$ depend on the firing times $t_1,t_2$. This is the important difference from the inspector game. One updates the bimatrix games while changing the firing times.

Therefore, there are only two parameters, $t_1$ and $t_2$, to be found by the fraud minimization (see Section 4.1). One can do that by applying methods, such as $Bayes$, to search for the pair $(t_1,\ t_2)$ that minimizes the fraud.

4 Search for Equilibrium

1 Bimatrix Game Algorithm

Consider the bimatrix game (12.23), (12.24) defined by fixing a pair $(t_1,\ t_2)$ of firing times. We search for the equilibrium of this game using conditions (13.14) and (13.19)-(13.23). Here the matrix is small, thus the proportion of pure strategy equilibrium points will not be considerable [Forgo et al., 1999]. Therefore, we start by looking for a mixed strategy that makes all the partner's strategies equally profitable. That makes the partner's fraud irrelevant. At a fixed pair $(t_1,\ t_2)$, a mixed strategy $x(r)$ of the first object is "fraud irrelevant" if

\begin{eqnarray}\sum_{r \in R} x(r) v(r,s) =V, \ s \in S,
\\ \sum_{r \in R} x(r) =1, \ x(r) \ge 0. \nonumber
\end{eqnarray}

There are four linear inequalities, five linear equations and five variables: $V,\ x(r), \ r\in R$, because the set $R$ has four elements.
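This small linear system (four equations $\sum_r x(r)v(r,s)=V$ plus the normalization, in the unknowns $x(1),...,x(4),V$, with the sign constraints checked afterwards) can be solved directly. A Python sketch with a hand-rolled Gaussian elimination; the 2x2 matching-pennies matrix is a stand-in test case, not a duel matrix:

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fraud_irrelevant(v):
    # Unknowns: x(1)..x(m), V.  Equations: sum_r x(r) v(r,s) = V for each s,
    # plus the normalization sum_r x(r) = 1.
    m = len(v)
    A = [[v[r][s] for r in range(m)] + [-1.0] for s in range(m)]
    A.append([1.0] * m + [0.0])
    b = [0.0] * m + [1.0]
    sol = solve_linear(A, b)
    x, V = sol[:m], sol[m]
    if all(xi >= -1e-9 for xi in x):     # the x(r) >= 0 inequalities
        return x, V
    return None                          # no fraud-irrelevant mixed strategy

# Matching pennies as a sanity check: x = (0.5, 0.5), V = 0.
print(fraud_irrelevant([[1.0, -1.0], [-1.0, 1.0]]))
```

When the non-negativity check fails, the LP formulation below handles the inequalities properly.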

A mixed strategy $y(s)$ of the second object is fraud irrelevant if

\begin{eqnarray}\sum_{s \in S} u(r,s) y(s) =U,\ r \in R,
\\ \sum_{s \in S} y(s) =1, \ y(s) \ge 0. \nonumber
\end{eqnarray}

These expressions include inequalities, which is usual in linear programming (LP). Applying LP, the variables $U$ and $V$ should be expressed as differences of two non-negative variables, $U=u_1-u_2$ and $V=v_1-v_2$. Then from expressions (12.28), (12.29) one obtains the LP problem:

\begin{eqnarray}\max_{x,y,u,v} (u_1-u_2+v_1-v_2), \\
\sum_{j=1}^m u(i,j) y_j = u_1-u_2,\ i=1,...,m, \nonumber \\
\sum_{i=1}^m x_i v(i,j) = v_1-v_2,\ j=1,...,m, \nonumber \\
\sum_{i=1}^m x_i =1,\ \sum_{j=1}^m y_j =1, \nonumber \\
x_i \ge 0,\ y_j \ge 0,\ i,j=1,...,m,\ u_1 \ge 0,\ u_2 \ge 0,\ v_1 \ge 0,\
v_2 \ge 0. \nonumber
\end{eqnarray}

Here $x=(x_1,...,x_m),\ y=(y_1,...,y_m),\ u=(u_1,u_2),\ v=(v_1,v_2)$.

Similar linear fraud irrelevant equations were considered in the inspection problem (see expressions (13.42) and (13.43)). Here the fraud irrelevant equations (12.31) and (12.32) depend on two unknown parameters $t_1$ and $t_2$. Therefore, linear problems (12.31) and (12.32) are just a part of general non-linear problem. We use longer notations, such as $U=U(t_1,t_2)$ and $V=V(t_1,t_2)$, to show that $U,V$ depend on $t_1,t_2$.

If there is no solution of fraud irrelevant equations, we search for the pure equilibrium strategies using condition (13.14). The Strategy Elimination Algorithm (SE) is a good alternative to obtain equilibrium. A general but difficult way to find the equilibrium is to solve the bilinear problem (13.18) defining the necessary and sufficient equilibrium conditions.

Obtaining equilibrium in mixed strategies $x(r), y(s),\ r \in R,\ s \in S$ or in pure strategies $r,s$, one reduces the six-dimensional optimization problem to the two-dimensional one.

\begin{eqnarray}\min_{z \in B} \parallel z - T_t(z) \parallel^2.
\end{eqnarray}

Here $z=(t_1^0,t_2^0)$, where $(t_1^0,t_2^0)$ are the "contract" firing times. The operator $T_t$ is defined by the fraud condition

\begin{eqnarray}t_1^1 = \arg \max_{t_1} U(t_1,t_2^0), \nonumber \\ t_2^1 = \arg
\max_{t_2} V(t_1^0,t_2).
\end{eqnarray}

Here $U(t_1,t_2^0)$ is the expected winning of the first object. It is the solution of the corresponding bimatrix game at fixed firing times $t_1$ and $t_2^0$. $V(t_1^0, t_2)$ is the expected winning of the second object, defined as the solution of the bimatrix game at given firing times $t_1^0$ and $t_2$. For example, the winnings $U(t_1,t_2^0)$ and $V(t_1^0, t_2)$ can be defined by the LP problems (12.31), (12.32), or by the Direct Search Algorithm (DSA) (see Section 2.1), or by the Strategy Elimination Algorithm (SE) (see Section 2.4).

The equilibrium is achieved, if the minimum (12.36) is zero. The minimization of (12.36) is the task for global optimization algorithms, including the Bayesian ones.

2 Matrix Game Algorithm

The bimatrix game algorithm can be applied to both the symmetric and the non-symmetric cases. In the symmetric case the utilities satisfy the zero-sum game condition $U=-V$. Here one obtains the equilibrium by solving the following linear programming problems

\begin{eqnarray}\max_x \ U,
\\ \sum_{i=1}^m x_i u_{ij} \ge U,\
j=1,...,m, \nonumber
\\ \sum_{i=1}^m x_i =1,\ x_i \ge 0, \nonumber
\\ U=u_1-u_2,\ u_1 \ge 0,\ u_2 \ge 0, \nonumber \end{eqnarray}


\begin{eqnarray}\min_y\ V,
\\ -\sum_{j=1}^m y_j u_{ij} \le V, \
i=1,...,m, \nonumber
\\ \sum_{j=1}^m y_j =1,\ y_j \ge 0, \nonumber
\\ V=v_1-v_2,\ v_1 \ge 0,\ v_2 \ge 0. \nonumber \end{eqnarray}

Here the minimax condition [Owen, 1968]

\begin{eqnarray}\min_y\ V = \max_x \ U
\end{eqnarray}

holds for all firing times $t_1,t_2$. Therefore, a search for the equilibrium is reduced to the condition

\begin{eqnarray}\min_{t_1,t_2} (U^2+V^2).
\end{eqnarray}

The zero-sum equilibrium is reached when the hitting probabilities are equal to 0.5 for both players and their winnings are zero (see the one-dimensional example in the next section).

5 One-dimensional Example

Applying expressions (12.1)-(12.19) in the one-dimensional space, the trajectories are

\begin{eqnarray}z(t) &=&t,\\ w(\tau)&=& 2-\tau, \end{eqnarray}

and the hitting probability is

\begin{eqnarray}p(t) =1-d(t)/D. \end{eqnarray}

Here $D \ge \max_t d(t)$. The control parameters are the firing times $x=t_1$ and $y=t_2$. Assuming that $D=2$ and applying expression (12.20), the expected utility function of the first object is

\begin{eqnarray}U(x,y)&=&\cases {x-(1-x), &if $x < y$, \cr
- y+(1-y), &if $x > y$, \cr x-y, & if
$x=y$. \cr}
\end{eqnarray}
The expected utility function of the second object follows from expression (12.21)

\begin{eqnarray}V(x,y)&=&\cases {y-(1-y), &if $x > y$, \cr
- x+(1-x), &if $x < y$, \cr y-x, & if
$x=y$. \cr}
\end{eqnarray}

The equilibrium is reached at the firing moment $x=y=0.5$. The one-dimensional example helps to understand and to test the results of the two-dimensional case.
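The one-dimensional case is easy to verify by brute force: on a grid of firing times, $x=y=0.5$ admits no profitable unilateral deviation. A Python sketch (the grid resolution is an arbitrary choice):

```python
def U(x, y):
    # One-dimensional duel: utility of the first player (firing times x, y).
    if x < y:
        return 2.0 * x - 1.0     # x - (1 - x)
    if x > y:
        return 1.0 - 2.0 * y     # -y + (1 - y)
    return x - y

def V(x, y):
    return U(y, x)               # the game is symmetric

grid = [i / 100.0 for i in range(101)]

def is_equilibrium(x, y):
    # No profitable unilateral deviation over the grid.
    return all(U(x, y) >= U(x2, y) for x2 in grid) and \
           all(V(x, y) >= V(x, y2) for y2 in grid)

print(is_equilibrium(0.5, 0.5))   # -> True
print(is_equilibrium(0.2, 0.8))   # -> False
```

At the equilibrium both players' winnings are zero, matching the zero-sum discussion above.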

6 Economic Duel. Nash Model

By the "Economic Duel" we mean a dynamic case of the Nash model. In Chapter 10, the static case was described.

There are $m$ servers providing the same service

\begin{eqnarray}u_i= u_i(x_1,y_1,...,x_m,y_m) = a_i y_i - x_i,\ i=1,...,m,
\end{eqnarray}

where $u_i$ is the profit, $y_i$ is the service price, $a_i$ is the rate of customers, $x_i$ is the running cost, and $i$ is the server index.

Suppose that arriving customers estimate the average waiting time as the ratio of the number of waiting customers $n_i$ to the server capacity $x_i$, which is assumed to be equal to the running cost

\begin{eqnarray}\gamma_i=n_i/x_i. \nonumber
\end{eqnarray}

A customer goes to the server $i$, if the total cost $c_i=y_i+\gamma_i$ satisfies

\begin{eqnarray}c_i \le c_j,\ j=1,...,m,\ j \not= i,\ c_i \le c_0. \nonumber
\end{eqnarray}

A customer goes away, if

\begin{eqnarray}\min_i c_i > c_0, \nonumber
\end{eqnarray}

where $c_0$ is the critical cost. The rate $a$ of incoming consumers is fixed

\begin{eqnarray}a= \sum_{i=0}^m a_i. \nonumber
\end{eqnarray}

To illustrate general ideas of the dynamic Nash model, we simplify the static one, first.

1 Simplified Nash Model. Static Case

We express the customer estimate of the expected waiting time as

\begin{eqnarray}\gamma_i = a_i / x_i,\ i=1,2.
\end{eqnarray}

Assume the customer balance condition.

\begin{eqnarray}a_i / x_i + y_i = q.
\end{eqnarray}

Here $q$ is the customer balance factor. The balance condition follows from (10.4), (10.5), (10.6), if the number of customers is large. The condition is based on the observation that arriving customers prefer servers with a lesser sum $y_i+\gamma_i$. From expression (10.7),

\begin{eqnarray}q \le c. \end{eqnarray}

From (12.48),

\begin{eqnarray}a_i = (q-y_i) x_i,
\end{eqnarray}

and the profit

\begin{eqnarray}u_i = (q-y_i) x_i y_i - x_i. \end{eqnarray}

Consider a monopolistic situation when only the $i$th server is there. In this case

\begin{eqnarray}a_i = a,\ if\ \ q \le c \end{eqnarray}


\begin{eqnarray}y_i + a/x_i \le c. \end{eqnarray}

Otherwise, customers abandon the service and $a_i=0$. The maximal monopolistic profit is

\begin{eqnarray}\max_{x_i,y_i} (a y_i -x_i),
\\ y_i+a/x_i \le c.
\end{eqnarray}

Replacing inequality (12.55) by an equality, we eliminate $x_i=a/(c-y_i)$ and maximize the expression

\begin{eqnarray}\max_{y_i} (a y_i -a/(c-y_i)).
\end{eqnarray}

Now one defines the optimal service price

\begin{eqnarray}y_i^*=c-1, \end{eqnarray}

the optimal service capacity,

\begin{eqnarray}x_i^*=a, \end{eqnarray}

and the maximal monopolistic profit

\begin{eqnarray}u_i^*= a(c-2).
\end{eqnarray}
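These optima are easy to confirm numerically. For instance, with the assumed values $a=10$ and $c=4$, a grid search over the service price recovers $y_i^*=c-1=3$ and $u_i^*=a(c-2)=20$ (a Python sketch):

```python
a, c = 10.0, 4.0   # assumed customer rate and critical cost

def profit(y):
    # Monopolistic profit a*y - a/(c - y), after eliminating x = a/(c - y).
    return a * y - a / (c - y)

# Grid search over feasible prices 0 < y < c.
grid = [i / 1000.0 for i in range(1, 3999)]
best_y = max(grid, key=profit)
print(best_y, profit(best_y))   # -> 3.0 20.0  (y* = c - 1, u* = a (c - 2))
```

The strictly concave profit has its unique maximum exactly at the analytic optimum $y^*=c-1$, which happens to lie on the grid.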

2 Dynamic Nash Model

In the previous section, considering the static model, it was tacitly assumed that a server can operate steadily while making a loss instead of a profit. Now we consider the case of insolvency, too. Suppose that a server goes bankrupt, if the accumulated losses exceed some credit threshold. Then the bankrupt server is eliminated and the remaining one assumes a monopolistic position.

Denote by $U_i(t)$ the profit accumulated by the $i$th server by the time $t$. Assume that

\begin{eqnarray}U_i(t)=\int_{t_0}^t u_i(t) dt,
\end{eqnarray}

where $u_i(t)$ defines a profit at a moment $t$

\begin{eqnarray}u_i(t)=a_i(t) y_i(t)-x_i(t).
\end{eqnarray}

Here $a_i(t)$ is a rate of customers of the $i$th server at a moment $t$. This rate is formally defined as a limit of the fraction

\begin{eqnarray}a_i(t)=\lim_{\delta \rightarrow 0}\frac
{A_i(t+\delta)-A_i(t)}{\delta},
\end{eqnarray}

where $A_i(t)$ is the total number of customers arrived during the interval $(0,t),\ t\le T$. Zero denotes the starting time of the system, and $T$ is the end of the operation period, the "horizon". In real life $A_i(t)$ is defined at fixed moments, for example, minutes, hours, days, etc. We regard it as a continuous parameter because we consider a continuous model.

Denote the bankruptcy threshold of the $i$th server by $U_i^*$ and the bankruptcy moment (if it happens) of the $i$th server by $t_i^*$. Then

\begin{eqnarray}a_i(t)&=&\cases {a, &if $t \ge t_j^*,\ t \le t_i^*,\ i,j=1,2,\ i \ne j$, \cr
0, &if $t \le t_j^*,\ t \ge t_i^*,\ i,j=1,2,\ i \ne j$. \cr}
\end{eqnarray}

Otherwise, customer rates $a_i(t)$ are defined by the customer balance condition (12.50).

From (12.59) the profit

\begin{eqnarray}u_i(t)&=&\cases {u_i^*, &if $t \ge t_j^*,\ t \le t_i^*,\ i,j=1,2,\ i \ne j$, \cr
0, &if $t \le t_j^*,\ t \ge t_i^*,\ i,j=1,2,\ i \ne j$. \cr}
\end{eqnarray}

Otherwise, the profit $u_i(t)$ is defined by (12.61).

Assume that both servers are planning their dynamic competition in advance. Each server $i$ defines trajectories of the service price $y_i(t)$ and the service capacity $x_i(t)$. These trajectories show how the service prices and capacities depend on the time $t \in (0,T)$. Objectives are to maximize profits

\begin{eqnarray}U_i=U_i(T),\ i=1,2.
\end{eqnarray}

Here $U_i(T)$ defines the accumulated profit (12.60) of the $i$th server at the end of the period, $t=T$.

We illustrate the idea by a simple example. Assume that servers control the initial values $y_i(0),\ x_i(0)$ and the rates of change $b_{y_i},\ b_{x_i}$ of the parameters $y_i(t),\ x_i(t)$. Then the trajectories are defined by the differential equations

\begin{eqnarray}d y_i(t)/dt =b_{y_i} y_i(t), \end{eqnarray}


\begin{eqnarray}d x_i(t)/dt =b_{x_i} x_i(t). \end{eqnarray}

The corresponding solutions are

\begin{eqnarray}y_i(t)=y_i(0) \exp(b_{y_i} t), \end{eqnarray}


\begin{eqnarray}x_i(t)=x_i(0) \exp(b_{x_i} t). \end{eqnarray}
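A short simulation makes the dynamic model concrete: with exponential price and capacity trajectories and the balance condition $a_i=(q-y_i)x_i$, the accumulated profit $U_i(T)$ can be integrated numerically. A Python sketch; all parameter values are assumed for illustration only:

```python
import math

# Assumed illustrative parameters for one server.
q = 4.0                  # customer balance factor
y0, by = 2.0, 0.1        # initial price and its growth rate
x0, bx = 1.0, 0.05       # initial capacity and its growth rate
T, steps = 1.0, 10000    # horizon and integration steps

U, dt = 0.0, T / steps
for k in range(steps):
    t = k * dt
    y = y0 * math.exp(by * t)      # y_i(t) = y_i(0) exp(b_y t)
    x = x0 * math.exp(bx * t)      # x_i(t) = x_i(0) exp(b_x t)
    a_rate = (q - y) * x           # customer rate from the balance condition
    U += (a_rate * y - x) * dt     # u_i(t) = a_i(t) y_i(t) - x_i(t)
print(round(U, 3))
```

The accumulated profit $U_i(T)$ obtained this way is the objective that each server maximizes in the equilibrium search below.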

1 Search for Nash Equilibrium

The search is similar to that in the Nash model (see Section 10). We fix the initial values, the "Contract-Vector"
$z^0=(x_i(0)^0,b_{x_i}^0,y_i(0)^0,b_{y_i}^0,\ i=1,2)$. The transformed values, the "Fraud-Vector"
$z^1=(x_i(0)^1,b_{x_i}^1,y_i(0)^1,b_{y_i}^1,\ i=1,2)$, are obtained by maximizing the profit of each server $i$. The maximization is performed assuming that all the partners $j \not= i$ honor the contract
$(x_j(0)^0,b_{x_j}^0,y_j(0)^0,b_{y_j}^0,\ j=1,2,\ j \not=i)$:

\begin{eqnarray}(x_i(0)^1,b_{x_i}^1,y_i(0)^1,b_{y_i}^1) =\nonumber \\
\arg \max_{x_i(0),b_{x_i},y_i(0),b_{y_i}} U_i (x_j(0)^0,b_{x_j}^0,y_j(0)^0,b_{y_j}^0,\nonumber \\ j=1,2,\ j \not=i ),\ i=1,2.
\end{eqnarray}

Here the symbol $U_i$ denotes the accumulated profit (12.65). Formally, condition (12.70) transforms the vector
$z^n=(x_i(0)^n,b_{x_i}^n,y_i(0)^n,b_{y_i}^n,\ i=1,2) \in B
\subset R^{8},\ n=0,1,2,...$, into the vector $z^{n+1}$. To make expressions shorter, denote this transformation by $T$:

\begin{eqnarray}z^{n+1} = T(z^n),\ n=0,1,2,...
\end{eqnarray}

One may obtain the equilibrium at the fixed point $z^n$, where

\begin{eqnarray}z^{n} = T(z^n).
\end{eqnarray}

The fixed point $z^n$ exists, if the feasible set $B$ is convex and all the profit functions (10.1) are convex [Michael, 1976]. We obtain the equilibrium directly by iterations (12.71), if the transformation $T$ is contracting. If not, then we minimize the square deviation

\begin{eqnarray}\min_{z \in B} \parallel z - T(z) \parallel^2.
\end{eqnarray}

The equilibrium is achieved, if the minimum (12.73) is zero.
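The fixed-point search can be illustrated with a toy contraction: assume linear "best response" reactions with made-up coefficients. Simple iteration then converges, and the squared deviation $\parallel z - T(z)\parallel^2$ vanishes at the equilibrium (a Python sketch, not the actual profit-maximizing map):

```python
def T(z):
    # Toy reaction map (assumed linear best responses, a contraction):
    # each player's optimal parameter is a damped response to the partner's.
    z1, z2 = z
    return (0.5 + 0.25 * z2, 0.5 + 0.25 * z1)

def deviation(z):
    # Squared deviation || z - T(z) ||^2 of the fraud condition.
    t = T(z)
    return (z[0] - t[0]) ** 2 + (z[1] - t[1]) ** 2

z = (0.0, 0.0)
for _ in range(50):
    z = T(z)                 # simple iteration; converges for contractions
print(z, deviation(z))       # fixed point z1 = z2 = 2/3, deviation ~ 0
```

When $T$ is not contracting, the same `deviation` function becomes the objective for a global optimization method, such as the Bayesian one.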

One makes the model more flexible by introducing second derivatives of the parameters $y_i(t)$ and $x_i(t)$. The second derivatives represent accelerations. Assume that servers control the starting values $y_i(0),\ x_i(0)$, the rates $b_{y_i},\ b_{x_i}$, and the accelerations $g_{y_i},\ g_{x_i}$ of the parameters $y_i(t),\ x_i(t)$. Then the trajectories are defined by the differential equations

\begin{eqnarray}d^2 y_i(t)/dt^2=g_{y_i} d y_i(t)/dt +b_{y_i} y_i(t),
\end{eqnarray}


\begin{eqnarray}d^2 x_i(t)/dt^2=g_{x_i} d x_i(t)/dt +b_{x_i} x_i(t).
\end{eqnarray}

Here the vector $z^n=(x_i(0)^n,b_{x_i}^n,g_{x_i}^n,y_i(0)^n,b_{y_i}^n,g_{y_i}^n,\ i=1,2) \in B \subset R^{12},\ n=0,1,2,...$, has twelve components. That means a high dimension of the fraud minimization problem (12.73). In addition, one needs to test the convexity of the accumulated profits $U_i$ defined by expressions (12.65) as functions of the parameters
$x_i(0),b_{x_i},g_{x_i},y_i(0),b_{y_i},g_{y_i},\ i=1,2$. If they are convex, then one obtains the equilibrium by condition (12.73). Otherwise, some specific algorithms should be designed. An example is the inspector problem.

7 Software Example

An interactive duel model is implemented using Java JDK 1.2 [Matulevicius, 1999]. To run the Java applet, go to the web-sites (see Section 4), start $GMJ2$, and select the task $DuelStarwars$ (see Figure 9.6).
The task window, in Figure 12.1, shows the domain of firing times $0 \le t_1 \le 1,\ 0 \le t_2 \le 1$.
Figure 12.1: The duel task window.

One selects the optimization method from the $Methods$ menu (see Figure 9.7). The method $Bayes$ is selected (see Figure 12.2).

Figure 12.2: The Bayes method selection.
The total number of iterations is set to forty. The number of initial iterations is set to five.

The optimization results are in Figure 12.3.

Figure 12.3: The results.
To start the analysis of results, one presses the $Analysis$ button (see Figure 9.9).
The $Convergence$ mode shows how the deviation from the equilibrium depends on the iteration number (see Figure 12.4).
Figure 12.4: The convergence window.

1 Operating Duel

The duel starts by selecting $DuelStarwars$ mode in the $Analysis$ menu (see Figure 9.8).
To run the duel, press the $Start$ button (see Figure 12.5).

Figure 12.5: The starting window.
One selects the operation mode in the operation window (see Figure 12.6).
Figure 12.6: The operation window in the $Computer\ vs\ Computer$ game.
There are three operation modes. Figure 12.6 shows the $Computer\ vs\ Computer$ mode. The
$Current\ situation$ field shows the parameters $z_0,w_0,a,b$ generated at random using probabilities defined by the equilibrium conditions. The firing times $t_1,t_2$ are defined by these conditions directly.

In the $Person\ vs\ Computer$ game the person fixes the parameters $z_0, a, t_1$ of the first object. A window to fix them is in the center of the top of Figure 12.7.

Figure: The input (top) and the outcome (bottom) of the $Person\ vs\ Computer$ game.
Probabilities of the parameters $w_0,b$ of the second object and its firing time $t_2$ are defined by the equilibrium conditions (12.33), (12.32), and (12.36). The parameters $w_0,b$ of the second object, and the actual hitting, are defined by these probabilities. Therefore, the outcome is random.

Figure 12.8 shows the input of the $Person\ vs\ Person$ game.

Figure 12.8: The input of the $Person\ vs\ Person$ game.
In this game parameters of both objects are fixed by persons using the input window. The outcome of this game is shown in a form similar to the $Person\ vs\ Computer$ game (see the bottom Figure 12.7).
One sets the speed of the moving objects in the $Speed$ window (see Figure 12.9).
Figure 12.9: The speed control window.
Figure 12.10 shows the $help$ windows.
Figure 12.10: The $help$ pages.

2 Future Developments

In the example, one searches for equilibrium in mixed strategies using the Irrelevant Fraud Algorithm (IF) which is described in section 2.3. This algorithm solves linear programming problems (12.31) and (12.32). If no solution is found, one searches for pure equilibrium strategies using the Direct Search Algorithm (DS) (13.14). The Strategy Elimination Algorithm (SE) (see section 2.4) helps, if both IF and DS fail.

The main future task is to implement flexible economic models. Another task is the implementation of second-order economic duel models. These tasks need additional theoretical investigations and software developments. Creation of entertaining graphical interfaces is important, too.

jonas mockus 2004-03-20