

10 Competition Model
with Fixed Resource Prices,
Nash Equilibrium

1 Optimization Problems in Market Models

We consider two models of the theory of games and markets [Rosenmuller, 1981]. The Nash equilibrium [Nash, 1950] is applied. The "market" is represented by a collection of independent servers. Each server tries to maximize its profit by setting optimal service prices and optimal server rates. The server rate is the average number of customers that would be served in nonstop operation. Thus, we can consider this rate as the server capacity.

The profits of individual servers are maximized assuming that their partners respect some agreement about service prices and rates. We call this agreement the "Contract-Vector." The service prices and rates obtained by maximizing the individual profits of the servers transform the Contract-Vector into the "Fraud-Vector." The optimization problem is to search for a Contract-Vector that minimizes the deviation of the Fraud-Vector from the Contract-Vector, making fraud less relevant. If this deviation is zero, fraud is irrelevant and the Nash equilibrium is achieved. Then no server can increase its profit by changing the service prices and server rates agreed in the contract.

The quality of service is defined by the average time lost by customers while waiting for service. Other quality measures are difficult to express formally. A customer prefers the server with the lower total service cost, which includes the service price plus the waiting losses. A customer goes away if the total cost exceeds a certain critical level. Both the flow of customers and the service times are stochastic. No analytical solution for this model is known, so the results are obtained by Monte Carlo simulation.

The first market model considers a simple case: the server rate depends on a single resource that is freely obtainable at some market price. We call this model the "Nash model," for short. In the second model, the server rates depend on several resources, and each server owns a fixed amount of a single resource. Thus, the number of different resources is equal to the number of servers. The resource limits are controlled using resource prices that are expressed as Lagrange multipliers. We call this model the "Walras model," for short. It is similar, but not identical, to the traditional Walras model [Walras, 1874].

In the Nash model, the servers rent their equipment at a fixed price per unit of server capacity. Therefore, in the Nash model we are looking for the equilibrium of server capacities and service prices. In the Walras model, the servers share each other's resources. Therefore, here we are also looking for the optimal prices charged for the shared resources, in addition to the equilibrium of server capacities and service prices.

Here the market models illustrate the possibilities and limitations of optimization theory and numerical techniques in competitive environments. Both models are developed as test functions for the Bayesian algorithms. However, these simple models may help to design more realistic ones that describe the processes of competition better. Besides, the simplified competitive models are convenient tools for teaching Operations Research and the Theory of Games and Markets.

2 Nash Model

Let us consider $m$ servers providing the same service. The profit of server $i$ is

\begin{eqnarray}u_i= u_i(x_1,y_1,...,x_m,y_m) = a_i y_i - x_i,\ i=1,...,m,
\end{eqnarray}


where $u_i$ is the profit, $y_i$ is the service price, $a_i$ is the rate of customers, $x_i$ is the running cost, and $i$ is the server index. Assume that a server capacity $w_i$ is an increasing function of the running cost $x_i$:

\begin{eqnarray}w_i = \phi_i (x_i).
\end{eqnarray}


A simple example of such a function is

\begin{eqnarray}w_i =k_{i} (1-\exp(- k_{i0} x_{i})).
\end{eqnarray}


Here $k_i$ defines the maximal server rate and $k_{i0}$ shows the efficiency of resource $x_i$. Then the total service cost is

\begin{eqnarray}c_i = y_i + \gamma_i,
\end{eqnarray}


where $\gamma_i$ is the average waiting cost. To simplify the expressions, assume that the waiting cost is equal to the average waiting time. Suppose that arriving customers estimate the average waiting time as the ratio of the number of waiting customers $n_i$ to the server capacity $w_i$

\begin{eqnarray}\gamma_i=n_i/w_i.
\end{eqnarray}


A customer goes to the server $i$, if

\begin{eqnarray}c_i \le c_j,\ j=1,...,m,\ j \not= i,\ c_i \le c_0.
\end{eqnarray}


A customer goes away, if

\begin{eqnarray}\min_i c_i > c_0,
\end{eqnarray}


where $c_0$ is the critical cost. The rate $a$ of incoming customers is fixed

\begin{eqnarray}a= \sum_{i=0}^m a_i,
\end{eqnarray}


where $a_0$ is the rate of lost customers. Conditions (10.6) and (10.7) separate the flow of incoming customers into $m+1$ flows. This makes the problem very difficult to solve analytically. The separated flow is not a simple one, even when the incoming flow is Poisson [Gnedenko and Kovalenko, 1987]. Thus, we need Monte Carlo simulation to define the average rates of customers $a_i,\ i=0,1,...,m$, by conditions (10.6) and (10.7), and the average profits $u_i,\ i=1,...,m$, by expression (10.1).
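The routing and loss rules above can be sketched as a discrete-event simulation. The following is a minimal illustration, not the author's implementation: it assumes Poisson arrivals and exponential service times (the model itself only requires stochastic flows), and all numerical parameters in the usage below are hypothetical.

```python
import heapq
import math
import random

def capacity(x, k, k0):
    # Server capacity (10.3): w = k * (1 - exp(-k0 * x))
    return k * (1.0 - math.exp(-k0 * x))

def simulate(x, y, k, k0, a=10.0, c0=5.0, horizon=10_000.0, seed=0):
    """Monte Carlo estimate of the customer rates a_i and profits u_i (10.1).
    x: running costs, y: service prices; k, k0: capacity parameters (10.3)."""
    rng = random.Random(seed)
    m = len(x)
    w = [capacity(x[i], k[i], k0[i]) for i in range(m)]
    queue = [0] * m            # n_i: customers waiting or in service
    served = [0] * m           # customers accepted by each server
    lost = 0                   # customers who went away, rule (10.7)
    events = []                # (time, kind, server index)
    heapq.heappush(events, (rng.expovariate(a), "arrival", -1))
    while events:
        t, kind, i = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            # Total costs (10.4)-(10.5): c_i = y_i + n_i / w_i
            costs = [y[j] + queue[j] / w[j] for j in range(m)]
            j = min(range(m), key=lambda s: costs[s])
            if costs[j] <= c0:            # rule (10.6): join cheapest server
                queue[j] += 1
                served[j] += 1
                if queue[j] == 1:         # server was idle: start service
                    heapq.heappush(events, (t + rng.expovariate(w[j]), "done", j))
            else:                          # rule (10.7): customer goes away
                lost += 1
            heapq.heappush(events, (t + rng.expovariate(a), "arrival", -1))
        else:                              # service completion at server i
            queue[i] -= 1
            if queue[i] > 0:
                heapq.heappush(events, (t + rng.expovariate(w[i]), "done", i))
    rates = [served[i] / horizon for i in range(m)]
    profits = [rates[i] * y[i] - x[i] for i in range(m)]   # expression (10.1)
    return rates, profits, lost / horizon
```

For example, `simulate([2.0, 2.0], [1.0, 1.2], [8.0, 8.0], [1.0, 1.0])` returns the estimated rates $a_i$, the profits $u_i$, and the rate $a_0$ of lost customers.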


1 Customers of Different Taste

In reality, different customers value their time differently. In the simplest case, one divides the customers into two categories: busy and very busy. The time coefficient of the busy customers is set to one, and that of the very busy customers is denoted by $k_b > 1$. Therefore, the total cost parameters $h_i$ differ between the two categories.
For busy customers

\begin{eqnarray}h_i=y_i+\gamma_i.
\end{eqnarray}


For very busy customers

\begin{eqnarray}h_i^b=y_i+k_b \gamma_i.
\end{eqnarray}


Assume that the probability of a very busy customer is $p_b$; then the probability of a busy customer is $1-p_b$.
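The two-category cost rule can be sketched as follows; the values of $k_b$ and $p_b$ in this snippet are illustrative assumptions, not taken from the text.

```python
import random

def perceived_costs(y, gamma, k_b=2.0, p_b=0.3, rng=random):
    """Draw a customer type and return the total costs it perceives.
    Busy customers use (10.8); very busy ones weight waiting by k_b (10.9).
    k_b and p_b are hypothetical illustrative values."""
    very_busy = rng.random() < p_b          # type drawn with probability p_b
    weight = k_b if very_busy else 1.0
    return [y_i + weight * g_i for y_i, g_i in zip(y, gamma)]
```

The routing rules (10.6) and (10.7) then apply to these perceived costs instead of the common costs $c_i$.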


3 Search for Nash Equilibrium

First we fix the initial values, the "Contract-Vector" $z^0=(x_i^0,y_i^0,\ i=1, ... ,m)$. The transformed values, the "Fraud-Vector" $z^1=(x_i^1,y_i^1,\ i=1, ... ,m)$, are obtained by maximizing the profit of each server $i$. The maximization is performed under the assumption that all partners $j \not= i$ will honor the contract $(x_j^0,y_j^0,\ j=1, ... ,m,\ j \not=i)$

\begin{eqnarray}(x_i^1, y_i^1) = \arg \max_{x_i,y_i}
u_i(x_i, y_i, x_j^0,y_j^0,\ j=1, ... ,m,\ j \not=i ),\
i=1,...,m.
\end{eqnarray}


Formally, condition (10.11) transforms the vector
$z^n=(x_i^n,y_i^n,\ i=1,...,m) \in B \subset R^{2m},\
n=0,1,2,...$ into the vector $z^{n+1}$. To make expressions shorter denote this transformation by $T$

\begin{eqnarray}z^{n+1} = T(z^n),\ n=0,1,2,...
\end{eqnarray}


One may obtain the equilibrium at the fixed point $z^n$, where

\begin{eqnarray}z^{n} = T(z^n).
\end{eqnarray}


The fixed point $z^n$ exists if the feasible set $B$ is convex and all the profit functions are concave [Michael, 1976]. We obtain the equilibrium directly by iterations (10.12) if the transformation $T$ is contracting [Neumann and Morgenstern, 1953]. If not, then we minimize the square deviation

\begin{eqnarray}\min_{z \in B} \parallel z - T(z) \parallel^2.
\end{eqnarray}


The equilibrium is achieved if the minimum (10.14) is zero. If the minimum (10.14) is positive, then the equilibrium does not exist. That is a theoretical conclusion. In statistical modeling, some deviations are inevitable. Therefore, we assume that the equilibrium exists if the minimum is not greater than the modeling errors.
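The iteration (10.12) and the deviation (10.14) can be sketched generically. The following is a minimal illustration under simplifying assumptions of my own: best replies are found by grid search, and the profit function `u` is a callback supplied by the user rather than the queueing model itself.

```python
import numpy as np

def best_response(u, z, i, grid):
    """Best reply of server i, as in (10.11): maximize its own profit over
    a candidate grid while the partners stay at the contract z.
    `u` maps (z, i) -> profit of server i."""
    best, best_val = z[i], -np.inf
    for cand in grid:
        trial = z.copy()
        trial[i] = cand
        val = u(trial, i)
        if val > best_val:
            best, best_val = cand, val
    return best

def T(u, z, grid):
    """One transformation step (10.12): every server defects simultaneously."""
    return np.array([best_response(u, z, i, grid) for i in range(len(z))])

def deviation(u, z, grid):
    """Square deviation ||z - T(z)||^2 from (10.14); zero at equilibrium."""
    return float(np.sum((z - T(u, z, grid)) ** 2))
```

On a hypothetical quadratic two-player game with best reply $z_i = 1 + 0.5 z_j$, repeated application of `T` converges to the fixed point $z^* = (2, 2)$, where `deviation` vanishes.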

One can minimize deviation (10.14) by the usual stochastic approximation techniques [Ermoljev and Wets, 1988] if square deviation (10.14) is a unimodal function of $z$. If not, then the Bayesian techniques of global stochastic optimization [Mockus, 1989a] should be used. The global stochastic optimization may outperform the local one in the unimodal case, too. That happens when the noise level is high, because the Bayesian global stochastic optimization methods are less sensitive to large noise levels.

The equilibrium is stable if transformation (10.12) is locally contracting near the fixed point (10.13). If not, then some stabilizing conditions should be introduced. The transformation $T(z)$ is referred to as locally contracting if there exists a constant $0 \le \alpha < 1$ such that

\begin{eqnarray}\Vert T(z^1) - T(z^2) \Vert \le \alpha \Vert z^1 - z^2 \Vert
\end{eqnarray}


for all $z^1, z^2 \in Z_{\epsilon}$. Here $Z_{\epsilon}$ is an $\epsilon$-vicinity defined by the "natural" deviations from the equilibrium.

One obtains the same result from the following minimization.

\begin{eqnarray}\min_{z \in B} \sum_i (u_i(T(z))-u_i(z)).
\end{eqnarray}


Here the difference $u_i(T(z))-u_i(z)$ shows the profit obtained by the $i$-th server by breaking the contract $z$. We call the sum (10.16) the fraud profit. Minimization of the fraud profit seems natural in economic terms and is convenient for computations. In these terms, equilibrium means a contract where fraud is not profitable.
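A minimal sketch of the fraud profit (10.16) follows; the quadratic game and its best-reply map used here are hypothetical examples of my own, chosen so that the equilibrium is known in closed form.

```python
def fraud_profit(u, z, T):
    """Fraud profit (10.16): total gain of all servers from defecting to
    their best replies T(z) while partners honor the contract z."""
    zf = T(z)
    return sum(u(zf, i) - u(z, i) for i in range(len(z)))

# Hypothetical quadratic game: best reply of server i is 1 + 0.5 * z_j,
# so the Nash equilibrium is z* = (2, 2).
u = lambda z, i: -(z[i] - 0.5 * z[1 - i] - 1.0) ** 2
T = lambda z: [1.0 + 0.5 * z[1 - i] for i in range(2)]

print(fraud_profit(u, [0.0, 0.0], T))  # positive: defection pays at (0, 0)
print(fraud_profit(u, [2.0, 2.0], T))  # zero: fraud is not profitable at z*
```

Minimizing `fraud_profit` over contracts $z$ then searches for a contract where fraud brings no gain.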

1 Existence of Nash Equilibrium

The first existence theorem is due to Nash [Nash, 1951] and dates back to 1951. Many generalizations have appeared since then. Finding less and less restrictive sufficient conditions has been an active field of research [Forgo et al., 1999]. The proofs of these conditions are based on various fixed point theorems [Brouwer, 1912,Kakutani, 1941,Browder, 1968]. Considering the examples of this book, we prefer simple existence conditions to the general ones. Testing the existence conditions, we express them in terms of the profit functions $u$ instead of the operators $T$.

For example, a stable equilibrium exists if the profit $u(z)$ is a strictly concave function of all the components of its parameters $z \in Z$, and $Z$ is a convex set [Michael, 1976]. In such cases, small changes of the $z$ components will not change the maximum points considerably.

The situation is different in the non-strictly-concave cases. Here even a very small change of some parameters $z$ may change the maximum point $z^*$ of $u(z)$ considerably. For example, in linear cases this point may jump from the minimal to the maximal limit. In multi-modal cases the point $z^*$ can jump from one local maximum to another. These sharp changes violate the continuity of the transformation $T$. The continuity of $T$ is needed in Brouwer's fixed point theorem [Brouwer, 1912]. In other theorems, such as Kakutani's [Kakutani, 1941] or Browder's [Browder, 1968], the fixed-point conditions are less restrictive. However, testing these conditions is not a trivial task.


4 Stable Coalition, Core of Games

If $m>2$, the formation and stability of a coalition $S \subset M$ are important issues.
Denote by $S=i$ a coalition of an individual server $i$.
Denote by $S=(i_1,i_2)$ a coalition of two servers $i_1$ and $i_2$.
Denote by $S=(i_1,...,i_m)=M$ the coalition of all $m$ servers.
Assume that the service prices $y_i=y_i(S)$, service rates $x_i=x_i(S)$, and shares of profit $u_i= u_i(S)$ are determined by the coalition agreement for all the coalition members $i \in S$.
Denote by $\vert S\vert$ the number of servers in the coalition $S$ and by $u(S)=\sum_{i \in S} u_i$ the coalition profit. Suppose that all the remaining servers form the opposite coalition $M \setminus S$. Assume that the profits $u_i$ correspond to the equilibrium service prices and rates $x(S), y(S), x(M \setminus S), y(M \setminus S)$. This way we consider the $m$-server system as a set of two-server systems, where the first server is the coalition $S$ and the second one is the coalition $M \setminus S$. A coalition $S^* \subset M$ is stable if there is no dominant coalition $S \subset M$ such that

\begin{eqnarray}u_i(S) > u_i(S^*) \quad \mbox{for all}\ i \in S.
\end{eqnarray}


The definition of the stable coalition is related to the definition of a game core ${\bf C}$ [Rosenmuller, 1981].

It is well known that if

\begin{eqnarray}u(M)>\sum_{i \in M} u(i),
\\ u(S) + u(M \setminus S)=u(M)
\end{eqnarray}
\end{eqnarray}


then the game core is empty, ${\bf C}=\emptyset$, and there is no stable coalition $S$. Here $u(i)$ denotes the profit of server $i$ acting individually.

Inequality (10.18) means that the game is essential: the grand coalition earns more than the servers acting individually. Equality (10.19) defines a constant-sum game [Owen, 1968]. The server system is clearly not a constant-sum game. Thus one of the two "non-existence" conditions, namely (10.19), is not satisfied, meaning that a stable coalition $S^*$ may exist. If a stable coalition $S^*$ exists, one may determine it by testing (10.17) for all pairs of coalitions. The game core ${\bf C}$ is empty if for any $S' \subset M$ there is a dominant coalition $S$.


1 Example of Server Coalition

Consider a case of three servers, $m=3$.
Denote by $S_1=i$ the coalition of an individual server $i$.
Denote by $S_2=(i_1,i_2)$ the coalition of two servers $i_1$ and $i_2$.
Denote by $S_3=(i_1,i_2,i_3)$ the monopolistic coalition of all three servers.
The subscript $k=1,2,3$ denotes the number of servers in the coalition $S_k$.
Consider for simplicity the symmetric case. Here the profit functions $u$ and the control parameters $x,y$ depend only on the number of servers in the coalition, not on their "names."

The symmetric case implicitly assumes equality of the control parameters and equal sharing of the coalition profit, meaning that in the case of $S_3$

\begin{eqnarray}u_i(S_3)= (u_i+u_j+u_l)/3,\\ x_i(S_3)=x_i=x_j=x_l,\
y_i(S_3)=y_i=y_j=y_l. \end{eqnarray}


In the case of $S_2$

\begin{eqnarray}u_i(S_2)= (u_i+u_j)/2,\ \ x_i(S_2)=x_i=x_j,\ y_i(S_2)=y_i=y_j.
\end{eqnarray}


In the case of individual servers $S_1$

\begin{eqnarray}u_i(S_1)= u_i,\ x_i(S_1)=x_i,\ y_i(S_1)=y_i. \end{eqnarray}


The monopolistic coalition $S_3$ is stable if $u_i(S_3) \ge u_i(S_2)$ and $u_i(S_3) \ge u_i(S_1)$.
The coalition $S_2$ is stable if $u_i(S_2) \ge u_i(S_3)$ and $u_i(S_2) \ge u_i(S_1)$.
The coalition $S_1$ is stable if $u_i(S_1) \ge u_i(S_3)$ and $u_i(S_1) \ge u_i(S_2)$.
There is no stable coalition if none of these conditions is satisfied.
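The three stability conditions above reduce, in the symmetric case, to comparing the per-server profits $u_i(S_k)$ for the three coalition sizes. A minimal sketch of this test, with hypothetical profit values in the checks:

```python
def stable_coalitions(u1, u2, u3):
    """Return the coalition sizes k that are stable in the symmetric
    three-server case: size k is stable if its per-server profit is at
    least as large as under both alternative coalition structures.
    u1, u2, u3 are the per-server profits u_i(S_1), u_i(S_2), u_i(S_3)."""
    profit = {1: u1, 2: u2, 3: u3}
    return [k for k in (1, 2, 3)
            if all(profit[k] >= profit[j] for j in (1, 2, 3) if j != k)]
```

For example, if the monopoly share is largest, only the monopolistic coalition $S_3$ is stable; if all three profits are equal, every coalition size satisfies the (non-strict) stability conditions.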
jonas mockus 2004-03-20