18 Sequential Statistical Decisions Model, "Bride Problem"


1 Introduction

Most decisions in personal life and in organizations are made sequentially. That means that, at any given moment, one either makes the final decision or postpones it, hoping for better times. The statistical part of sequential decisions represents the uncertainty of future events and the limited reliability of our observations.

The Bride problem is a good example of sequential statistical decisions [Wald, 1947,Wald, 1950]. Dynamic programming is the conventional technique for optimizing sequential decisions [Bellman, 1957]. Applying dynamic programming to specific problems, one usually develops specific algorithms. Algorithms similar to those of the bride's decision making can be applied to choosing the optimal time to buy durable goods, such as cars, houses, and computers, to starting a new business, etc.

Typical industrial applications of sequential decisions are optimal stopping rules. For example, one can consider the stopping of a nuclear plant as a "marriage". One can evaluate the stopping cost directly. The risks involved in running the plant, when one observes some deviations from the normal routine, can be considered as the risks of waiting for the next proposal: one hopes for something better while risking a bad outcome.


2 Average Utility

The Bride problem is to maximize the average utility of marriage by the optimal choice of a groom. Denote the actual goodness of a groom $i$ by $\omega_i$ and the bride's impression about groom $i$ by $s_i$. $p(\omega_i)$ is a prior probability density of the goodness $\omega_i$, and $p_s(s_i\vert\omega_i)$ is the probability density of the impression $s_i$. Assume that the goodness values of different grooms are independent and identically distributed. This means that the prior probability density of goodness is

\begin{eqnarray}
p(\omega_i,\omega_j)=p(\omega_i) p(\omega_j),
\\
p(\omega_i)=p(\omega_j)=p(\omega).\nonumber \end{eqnarray}


Suppose that impressions about the goodness of grooms are independent and identically distributed. Then the probability density of an impression $s_i$, given the goodness $\omega_i$, is

\begin{eqnarray}
p_s(s_i, s_j \vert \omega)=p_s(s_i \vert \omega) p_s(s_j \vert \omega),
\\
p_s(s_i \vert \omega)=p_s(s_j \vert \omega)=p(s\vert\omega).\nonumber
\end{eqnarray}


Assume the Gaussian prior probability density of goodness

\begin{eqnarray}
p(\omega)= {1 \over \sqrt {2\pi} \sigma_0} e^{-1/2 ( {\omega - \alpha_0
\over \sigma_0})^2}.
\end{eqnarray}


Here $\sigma_0^2$ is the prior variance and $\alpha_0$ is the prior mean of goodness. For example, $\alpha_0 >0$ describes an optimistic bride, while $\alpha_0 <0$ describes a pessimistic one. Suppose that the probability density of the bride's impression, given the goodness $\omega$, is

\begin{eqnarray}
p(s\vert\omega)= {1\over \sqrt {2\pi} \sigma} e^{-1/2 ( {s-\omega
\over \sigma})^2}.
\end{eqnarray}


Here $\sigma^2$ is the variance of impressions around the true goodness $\omega$.

Assume that both the groom's goodness and the bride's impression are random variables depending on many independent factors; this motivates the Gaussian distributions (18.3) and (18.4). The posterior probability density of goodness $\omega$, given the impression $s$, is defined by the Bayesian formula [Bayes, 1783]

\begin{eqnarray}
p(\omega\vert s)= {p(s\vert\omega) p(\omega) \over p_s(s)}.
\end{eqnarray}


Here

\begin{eqnarray}
p_s(s)= \int_{-\infty}^{\infty} p(s\vert\omega) p(\omega) d\omega.
\end{eqnarray}
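A numerical sketch may help here. The following Python fragment (all parameter values are illustrative assumptions) evaluates the posterior mean of goodness by direct numerical integration of the Bayes formula (18.5)-(18.6), and checks it against the conjugate closed form known for the Gaussian case (18.3)-(18.4):

```python
import math

def posterior_mean(s, alpha0, sigma0, sigma, grid_m=20.0, k=4001):
    """Posterior mean of goodness w given impression s, by numerical
    integration of the Bayes formula (18.5)-(18.6) on a grid [-M, M]."""
    def prior(w):
        return math.exp(-0.5 * ((w - alpha0) / sigma0) ** 2) / (math.sqrt(2 * math.pi) * sigma0)
    def lik(s, w):
        return math.exp(-0.5 * ((s - w) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)
    h = 2 * grid_m / (k - 1)
    ws = [-grid_m + i * h for i in range(k)]
    joint = [lik(s, w) * prior(w) for w in ws]
    ps = sum(joint) * h                      # marginal p_s(s), expression (18.6)
    return sum(w * j for w, j in zip(ws, joint)) * h / ps

# In the Gaussian case the posterior mean has the known closed form
# (sigma^2*alpha0 + sigma0^2*s) / (sigma0^2 + sigma^2).
alpha0, sigma0, sigma, s = 0.5, 1.0, 2.0, 3.0    # assumed illustrative values
closed = (sigma**2 * alpha0 + sigma0**2 * s) / (sigma0**2 + sigma**2)
assert abs(posterior_mean(s, alpha0, sigma0, sigma) - closed) < 1e-6
```

The grid bound and step are chosen so that the Gaussian tails outside $[-M,M]$ are negligible.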



3 Single-Marriage Case

Denote by $d_i$ the bride's decision about the groom $i$

\begin{eqnarray}
d_i &=&\cases {1, &if the bride marries groom $i$, \cr 0, &otherwise.\cr}
\end{eqnarray}


Suppose that

\begin{eqnarray}
\sum_{i=1}^m d_i= 1.
\end{eqnarray}


The last condition means that brides marry, and marry only once. Formally, condition (18.8) defines the set of feasible decisions when the $(N-n)$-th groom proposes

\begin{eqnarray}
D_{N-n} &=&\cases {\{0, 1\}, &if $g_{N-n} =0$, \cr
\{0\}, &if $g_{N-n}=1$.\cr}
\end{eqnarray}


Here $g_{N-n}$ is the marriage index

\begin{eqnarray}
g_{N-n}=1 -\sum_{i=1}^{N-n-1} d_i.
\end{eqnarray}


The marriage index is zero if the bride is already married. This prevents repeated marriages.


1 Bellman's Equations

The expected utility function is $u(s)$. Here $s$ is the impression made by the successful groom.

\begin{eqnarray}
u(s)=\int_{-\infty}^{\infty} \omega p(\omega\vert s) d\omega.
\end{eqnarray}


Denote by $u_N(s)$ the expected utility function, if the impression of the last groom is $s$

\begin{eqnarray}
u_N(s)=\int_{-\infty}^{\infty} \omega p(\omega\vert s) d\omega.
\end{eqnarray}


Comparing (18.11) and (18.12) one observes that

\begin{eqnarray}
u(s)=u_N(s). \end{eqnarray}


This follows from the independence assumptions (18.1) and (18.2). Denote by $u_{N-1}(s)$ the expected utility if the impression of the $(N-1)$-th groom is $s$ and the bride makes the optimal decision $d=d_{N-1}(s) \in D_{N-1}$

\begin{eqnarray}
u_{N-1}(s)=\max_d (d u(s) + (1-d) u_N),
\\
d_{N-1}(s)=arg \max_d (d u(s) + (1-d) u_N).
\end{eqnarray}


Here

\begin{eqnarray}
u_N=\int_{-\infty}^{\infty} u_N(s) p_s(s) ds.
\end{eqnarray}


Following the same pattern, we define the expected utility, if the impression of the $(N-n)$-th groom is $s$ and the bride is making the optimal decision $d=d_{N-n}(s) \in D_{N-n}$

\begin{eqnarray}
u_{N-n}(s)=\max_d (d u(s) + (1-d) u_{N-n+1}),
\\
d_{N-n}(s)=arg \max_d (d u(s) + (1-d) u_{N-n+1}).
\end{eqnarray}


Here

\begin{eqnarray}
u_{N-n+1}=\int_{-\infty}^{\infty} u_{N-n+1}(s) p_s(s) ds.
\end{eqnarray}


Note that the utility $u(s)$ of accepting a proposal in expressions (18.17) and (18.18) is a function of the impression $s$ only; it does not depend on the proposal number $N-n$. This follows from the independence assumptions (18.1) and (18.2). Solving these recurrent equations, one defines the sequence of optimal decision functions $d_{N-n}(s) \in D_{N-n}$ and the expected utilities $u_{N-n}(s)$. This should be done for all possible impressions $s \in (-\infty,\infty)$ and for all numbers $n=1,...,N-1$. One cannot do that in the continuous case; therefore, one uses a discrete approximation.


2 Discrete Approximation

From expressions (18.11) and (18.12), replacing the integrals by sums one obtains that

\begin{eqnarray}
u_N(s)= u(s) =2M/K \sum_{k=1}^K \omega_k p(\omega_k\vert s).
\end{eqnarray}


From expression (18.16)

\begin{eqnarray}
u_N=2M/K \sum_{k=1}^K u_N(s_k) p_s(s_k).
\end{eqnarray}


From expression (18.19)

\begin{eqnarray}
u_{N-n+1}=2M/K \sum_{k=1}^K u_{N-n+1}(s_k) p_s(s_k).
\end{eqnarray}


Here $\omega_k \in [-M,M],\ \omega_1=-M,\ \omega_K=M$ and $s_k
\in [-M,M],\ s_1=-M,\ s_K=M$. This is a discrete approximation of the recurrent equations. All possible impressions $s_k \in [-M,M]$ and all numbers $n=1,...,N-1$ are considered. One sets the number of discretization points $K$ according to the accuracy needed.

The results are sequences of optimal decision functions $d_{N-n}(s_k)$ and the expected utilities $u_{N-n}(s_k)$. These sequences are stored in a set of arrays which define how the optimal decisions $d$ and the expected utilities $u$ depend on the possible impressions $s_k,\ k=1,...,K$. Doing this, one avoids repeated calculations. The need for large arrays is a disadvantage of this procedure.
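A minimal sketch of this procedure in Python. It assumes the Gaussian case, so that $u(s)$ can be evaluated by the conjugate closed form instead of the grid sum (18.20); the grid bounds and all parameter values are illustrative assumptions:

```python
import math

# Illustrative parameters (assumptions): prior mean, prior deviation,
# impression noise; grid bound M, grid size K, number of grooms N.
ALPHA0, SIGMA0, SIGMA = 0.0, 1.0, 1.0
M, K, N = 5.0, 501, 10

h = 2 * M / (K - 1)
s_grid = [-M + k * h for k in range(K)]

def u(s):
    # posterior mean of goodness given impression s (Gaussian conjugate case)
    return (SIGMA**2 * ALPHA0 + SIGMA0**2 * s) / (SIGMA0**2 + SIGMA**2)

def p_s(s):
    # marginal density (18.6): Gaussian with variance sigma0^2 + sigma^2
    v = SIGMA0**2 + SIGMA**2
    return math.exp(-0.5 * (s - ALPHA0) ** 2 / v) / math.sqrt(2 * math.pi * v)

# backward recursion (18.17)-(18.19); u_wait is the expected utility of waiting
u_wait = sum(u(s) * p_s(s) for s in s_grid) * h    # u_N, discrete form of (18.19)
waiting_values = [u_wait]
for n in range(1, N):
    # (18.17): accept groom N-n iff u(s) > u_wait, else keep waiting
    u_vals = [max(u(s), u_wait) for s in s_grid]
    u_wait = sum(uv * p_s(s) for uv, s in zip(u_vals, s_grid)) * h
    waiting_values.append(u_wait)

# the expected utility of waiting grows with the number of remaining grooms
assert all(a <= b for a, b in zip(waiting_values, waiting_values[1:]))
```

The acceptance threshold at each stage is the current value of waiting, so the bride becomes less demanding as fewer grooms remain.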

3 Including the Waiting Cost

The waiting losses are important in many real-life sequential decision problems. Denote by $c$ the loss of waiting for the next groom. Including this parameter into the Bellman equations, one obtains

\begin{eqnarray}
u_{N-1}(s)=\max_d (d u_N(s) + (1-d) (u_N - c)),
\\
d_{N-1}(s)=arg \max_d (d u_N(s) + (1-d)(u_N-c)).
\end{eqnarray}


In a similar way one defines the expected utility if the impression of the $(N-n)$-th groom is $s$ and the bride is making the optimal decision $d_{N-n}(s)$

\begin{eqnarray}
u_{N-n}(s)=\max_d (d u_N(s) + (1-d) (u_{N-n+1}-c)),
\\
d_{N-n}(s)=arg \max_d (d u_N(s) + (1-d) (u_{N-n+1}-c)).
\end{eqnarray}


The other expressions remain the same.
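A small numerical illustration of decision (18.24), with assumed values of $u_N$ and $u(s)$, shows how the waiting cost $c$ widens the acceptance region:

```python
# With a waiting cost c, the utility of rejecting the (N-1)-th groom
# drops from u_N to u_N - c, so the acceptance condition u(s) > u_N - c
# becomes easier to satisfy. The sample values below are assumptions.
def accept(u_s, u_wait, c):
    # decision (18.24): marry iff the immediate utility beats waiting
    return 1 if u_s > u_wait - c else 0

u_N = 0.5            # assumed expected utility of waiting
u_s = 0.4            # assumed utility of the current impression
assert accept(u_s, u_N, c=0.0) == 0   # without waiting cost: wait
assert accept(u_s, u_N, c=0.2) == 1   # with waiting cost 0.2: marry
```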

4 Non-Linear Case

Expression (18.12) was defined assuming a linear utility function of the bride: it was supposed that the bride's utility equals the goodness of the groom, $u(\omega)=\omega$. In real life, utility functions are usually nonlinear (see Chapter 14) and non-convex. Then expression (18.12) is replaced by the integral

\begin{eqnarray}
u_N(s)=\int_{-\infty}^{\infty} u(\omega) p(\omega\vert s) d\omega.
\end{eqnarray}
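The integral (18.27) can be evaluated numerically. The concave utility $u(\omega)=1-e^{-\omega}$ and all parameter values below are assumptions chosen for illustration; the assertions check the linear special case and Jensen's inequality for the concave one:

```python
import math

# Numerical evaluation of (18.27): the posterior p(w|s) is Gaussian in
# the conjugate case, and the utility function is passed in as `util`.
def expected_utility(s, util, alpha0=0.0, sigma0=1.0, sigma=1.0, m=12.0, k=4001):
    mean = (sigma**2 * alpha0 + sigma0**2 * s) / (sigma0**2 + sigma**2)
    var = sigma0**2 * sigma**2 / (sigma0**2 + sigma**2)
    h = 2 * m / (k - 1)
    total = 0.0
    for i in range(k):
        w = mean - m + i * h
        p = math.exp(-0.5 * (w - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)
        total += util(w) * p * h
    return total

linear = expected_utility(1.0, lambda w: w)              # reduces to the posterior mean
concave = expected_utility(1.0, lambda w: 1 - math.exp(-w))
assert abs(linear - 0.5) < 1e-6
# Jensen's inequality: for a concave utility the expected utility lies
# below the utility of the posterior mean
assert concave < 1 - math.exp(-linear)
```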



4 Multi-Marriage Case

Condition (18.8) implies that brides marry, and marry only once. No divorce is permitted. That assumption fails if the "marriage" represents buying a Personal Computer (PC). We illustrate this by a simple "Buy-a-PC" example.


1 "Buy-a-PC" Example

Define the PC parameters by a vector $g=(g_1,g_2,g_3)$. Here $g_1$ is the speed of the CPU in $MHz$, $g_2$ is the volume of RAM in $MB$, and $g_3$ is the volume of the HD in $GB$. Express the subjective utility of a PC by the weighted sum $\omega=a_1 g_1+a_2g_2+a_3g_3$. Here the weights $a_i$ are expressed in $ per unit.

Assume that a prior probability density of PC utilities is Gaussian

\begin{eqnarray}
p_t(\omega)= {1 \over \sqrt {2\pi} \sigma_0} e^{-1/2 ( {\omega - \alpha_t
\over \sigma_0})^2}.
\end{eqnarray}


Here $\sigma_0^2$ is the prior variance and $\alpha_t$ is the prior mean of PC utilities. Suppose that $\sigma_0$ is constant and that $\alpha_t=\alpha_0+\alpha_1 t$. This means that the expected PC utility increases linearly in time, while the expected diversity remains the same.

The PC price in $ is denoted by $l$. Suppose that the price is also a weighted sum of the PC parameters

\begin{eqnarray}
l=b_1 g_1+b_2g_2+b_3g_3.
\end{eqnarray}


Here parameters $b_i$ are defined in $ per unit and reflect the market prices of CPU, RAM, and HD. Expressing the price $l$ as a function of utility $\omega$:

\begin{eqnarray}
l=h_0 \omega.
\end{eqnarray}


Here

\begin{eqnarray}
h_0= \frac{b_1 g_1+b_2g_2+b_3g_3}{a_1 g_1+a_2g_2+a_3g_3}.
\end{eqnarray}


Assume that one observes the utility $\omega$ exactly. This means that the impression is $s=\omega$ and that the impression error in expression (18.4) is $\sigma=0$. That simplifies the problem.
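A tiny worked instance of expressions (18.29)-(18.31); the parameter values $g$, $a$, $b$ are invented for illustration:

```python
# Assumed illustrative PC parameters and weights.
g = (500.0, 128.0, 20.0)        # CPU MHz, RAM MB, HD GB
a = (0.4, 1.0, 5.0)             # subjective worth, $ per unit
b = (0.2, 0.5, 2.0)             # market price, $ per unit

omega = sum(ai * gi for ai, gi in zip(a, g))   # subjective utility
l = sum(bi * gi for bi, gi in zip(b, g))       # price, expression (18.29)
h0 = l / omega                                  # price per unit of utility (18.31)
assert abs(l - h0 * omega) < 1e-9               # consistency with (18.30)
```

Note that $h_0$ is a constant only while the ratios of the parameters $g_i$ stay fixed.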


2 Bellman's Equations

The expected utility function is $u(\omega, q)$. Here $\omega$ is the utility of a new PC and $q$ is the utility of the old PC, to be replaced by the new one. Consider a "horizon" of $N$ years. During a year, one can change the PC at most once.

Denote by $u_N(\omega,q)$ the maximal expected utility in the year $N$

\begin{eqnarray}
u_N(\omega,q)=\max_d (d \omega +(1-d)(q_N-c_N)).
\end{eqnarray}


There are two possible decisions $d=\{0,1\}$. The decision $d=1$ means to buy a new PC. The utility of the new PC is $\omega$. The utility of the old one is $q$.

The utility of the decision $d=0$, to keep the old PC in the last year $N$, is $q_N-c_N$. Here $c_N=\tau_N-l_N$ is the cost of refusing to buy a new PC: the waiting losses $\tau_N$ minus the price $l_N$ of a new PC in the year $N$ defined by (18.30). It is assumed that one abandons the old PC as soon as one obtains the new one; therefore, by keeping the old PC one "wins" the price $l_N$ of the new PC. The optimal decision in the last year $N$

\begin{eqnarray}
d_{N}^* &=&\cases {1, &if $\omega_N > q_N-c_N$, \cr
0, &if $\omega_N \le q_N-c_N$.\cr}
\end{eqnarray}


Denote by $u_{N-1}(\omega,q)$ the maximal expected utility in the year $N-1$.

\begin{eqnarray}
u_{N-1}(\omega,q)=\max_d (d \omega + (1-d)( u_N(q_N)-c_{N-1})).
\end{eqnarray}


Here $u_N(q)$ is the maximal expected utility in the year $N$, if the utility of the old PC is $q$.

\begin{eqnarray}
u_N(q)=\int_{-\infty}^{\infty}u_N(\omega,q)p_N(\omega) d\omega.
\end{eqnarray}


$p_N(\omega)$ is a prior probability density of $\omega$ defined by expression (18.28) at a time $t=N$.

\begin{eqnarray}
q_N &=&\cases {\omega_{N-1}, &if $q_{N-1} < q_{N-1}^*$, \cr
q_{N-1} , &if $q_{N-1} \ge q_{N-1}^*$.\cr}
\end{eqnarray}


Here $q_{N-1}^*$ is obtained from this equation

\begin{eqnarray}
\omega_{N-1}=u_N(q_{N-1}^*)-c_{N-1}
\end{eqnarray}


The optimal decision in the year $N-1$

\begin{eqnarray}
d_{N-1}^* &=&\cases {1, &if $\omega_{N-1} > u_N(q_N)-c_{N-1}$, \cr
0, &if $\omega_{N-1} \le u_N(q_N)-c_{N-1}$.\cr}
\end{eqnarray}


Denote by $u_{N-i}(\omega,q)$ the maximal expected utility in the year $N-i,\ i=1,...,N-1$. Then

\begin{eqnarray}
u_{N-i}(\omega,q)=\max_d (d \omega + (1-d)( u_{N-i+1}(q_{N-i+1})-c_{N-i})).
\end{eqnarray}


Here $u_{N-i+1}(q)$ is the maximal expected utility in the year $N-i+1$, if the utility of the old PC is $q$.

\begin{eqnarray}
u_{N-i+1}(q)=\int_{-\infty}^{\infty}u_{N-i+1}(\omega,q)p_{N-i+1}(\omega) d\omega.
\end{eqnarray}


$p_{N-i+1}(\omega)$ is a prior probability density of $\omega$ defined by expression (18.28) at a time $t=N-i+1$.

\begin{eqnarray}
q_{N-i+1} &=&\cases {\omega_{N-i}, &if $q_{N-i} < q_{N-i}^*$, \cr
q_{N-i} , &if $q_{N-i} \ge q_{N-i}^*$.\cr}
\end{eqnarray}


$q_{N-i}^*$ is obtained from the equation

\begin{eqnarray}
\omega_{N-i}=u_{N-i+1}(q_{N-i}^*)-c_{N-i}
\end{eqnarray}


The optimal decision in the year $N-i$

\begin{eqnarray}
d_{N-i}^* &=&\cases {1, &if $\omega_{N-i} > u_{N-i+1}(q_{N-i+1})-c_{N-i}$, \cr
0, &if $\omega_{N-i} \le u_{N-i+1}(q_{N-i+1})-c_{N-i}$.\cr}
\end{eqnarray}


Here the decision functions $d_{N-i}^*$ depend on the two variables $\omega, q$ defining the corresponding utilities of the new and the old computer. This complicates both the calculations and the graphical representation.

One applies a discrete approximation such as that in Section 3.2. The optimal decision functions depend on the two variables $\omega$ and $q$. Therefore, one needs $K$ times more calculations to obtain the same accuracy as in the single-marriage case (see Section 3.2).

To solve real-life problems, the "directly unobservable" factors should be included into the "Buy-a-PC" model. For example, one can estimate the expected life-span $T(s)$ of a PC, given the impression $s$, by the Bayesian formula

\begin{eqnarray}
T=\int_{-\infty}^{\infty} \tau p(\tau\vert s) d\tau,
\\
p(\tau\vert s)= {p(s\vert\tau) p(\tau) \over p_s(s)}.
\end{eqnarray}


Here $\tau$ is the life-span of PC and $s$ is the user's impression about its reliability. The impression $s$ depends on the warranty, the reputation of the manufacturer and on some subjective factors, too.

1 One-Step-Ahead Approximation

To simplify the calculations, the "One-Step-Ahead" approximation is suggested. Here one assumes that the next step is the last one and introduces the parameter $r=\omega_{N-1}-q_{N-1}+c_{N-1}$.

The one-step-optimal decision

\begin{eqnarray}
d_{N-1}(r) &=&\cases {1, &if $r \ge 0$, \cr
0, &if $r < 0$.\cr}
\end{eqnarray}


Following the same pattern, one defines the maximal expected utility in $(N-n)$-th year

\begin{eqnarray}
u_{N-n}(\omega,q)=\max_d (d \omega_{N-n} + (1-d) u_{N-n+1}(q)),
\end{eqnarray}


where

\begin{eqnarray}
u_{N-n+1}(q)=
\int_{-\infty}^{\infty}u_{N-n+1}(\omega,q)p_{N-n+1}(\omega) d\omega.
\end{eqnarray}


Here

\begin{eqnarray}
q &=&\cases {\omega_{N-1}, &if $r \ge 0$, \cr
q_{N-1} , &if $r < 0$.\cr}
\end{eqnarray}


The one-step-optimal decision

\begin{eqnarray}
d_{N-n}(r) &=&\cases {1, &if $r \ge 0$, \cr
0, &if $r < 0$.\cr}
\end{eqnarray}


The one-step-optimal decision $d_{N-n}(r)$ is a function of the difference $r=\omega_{N-n}-u_{N-n+1}(q)$, which depends on the two parameters $\omega$ and $q$. Therefore, one defines it as a function of critical pairs $(\omega,q)$ depending on the time $t=n$.

Solving these recurrent equations one defines the sequence of one-step-optimal decision functions $d_{N-n}(r)$ and the expected utilities $u_{N-n}(\omega, q)$. This should be done for all possible values of $\omega$ and $q$.

The One-Step-Ahead approximation, assuming that $r=\omega_{N-n}-q_{N-n}-l_{N-n}$, reduces the computing time to the level of the single-marriage case. However, one needs to estimate the approximation error.
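The one-step-ahead rule itself is a simple threshold test on $r$. A sketch of the rule (18.45)/(18.48), with assumed sample values:

```python
# One-Step-Ahead rule: buy the new PC iff r = omega - q + c >= 0, where
# omega is the new PC utility, q the old one, and c the cost of refusing
# to buy. The sample values below are assumptions for illustration.
def one_step_decision(omega, q, c):
    r = omega - q + c
    return 1 if r >= 0 else 0

assert one_step_decision(omega=10.0, q=8.0, c=-3.0) == 0  # keep the old PC
assert one_step_decision(omega=10.0, q=8.0, c=1.0) == 1   # buy the new PC
```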

5 Optimal Ordering of Supplies

An important application of sequential statistical decisions is the optimal ordering of supplies. Consider, for example, a computer shop that orders $d$ units of RAM (Random Access Memory) at fixed times $t=1,...,N$, dividing the time into $N$ stages of unit length each.


1 Uncertain Demand

One optimizes the supply $d$ when the demand $\omega$ is unknown. Denote by $\omega$ the number of RAM units demanded during one stage. Denote the retail price of one RAM unit by $c_r$ and the wholesale price by $c_w \le c_r$. The utility function (the profit) at the $(N-i)$-th stage

\begin{eqnarray}
v_{N-i}(\omega,d,q)&=&\cases { c_r \omega-c_w d , & if $b \ge \omega$, \cr
c_r \omega-c_w d +L(b-\omega), & if $b < \omega$.\cr}
\end{eqnarray}


Here $c=c_r-c_w$ and $b=d+q$, where $q$ is the number of unsold units. $L$ is the loss due to short supply, including fines for breaking supply contracts and/or damage to the reputation as a reliable retailer. All these parameters correspond to the stage $N-i,\ i=0,1,...,N-1$, meaning that $\omega=\omega_{N-i},\ d=d_{N-i},\ q=q_{N-i}$, where $\omega_{N-i}$ is the demand at the stage $N-i$, $d=d_{N-i} \in D$ is the supply belonging to a feasible set $D=D_{N-i}$ at this stage, and $q=q_{N-i-1}=d_{N-i-1}+q_{N-i-2}-\omega_{N-i-1}$ is the number of unsold RAM units left from the previous stage $N-i-1$. Here and later we omit the lower indices to make expressions shorter.

Denote by $p_{N-i}(\omega)$ the probability of demand $\omega_{N-i}=\omega$ at the stage $N-i$.
Usually the demand $\omega=0,1,2,...$ during the time $\tau$ is described by the Poisson distribution [Tijms, 1994]

\begin{eqnarray}
p_{N-i}(\omega)=
e^{-\lambda\tau} \frac{(\lambda\tau)^{ \omega}}{ \omega!}.
\end{eqnarray}


Considering stages of unit time, $\tau=1$, one obtains

\begin{eqnarray}
p_{N-i}(\omega)=
e^{-\lambda} \frac{\lambda^{ \omega}}{ \omega!},
\end{eqnarray}


where $\lambda=\lambda_{N-i}$ is the expected demand at the stage $N-i$. Then the expected utility at the stage $N-i,\ i=0,...,N-1$

\begin{eqnarray}
u_{N-i}(d,q)=\sum_{\omega} v_{N-i}(\omega,d,q)\ p_{N-i}(\omega)
\end{eqnarray}


The maximal expected utility at the last stage $N$

\begin{eqnarray}
u_N(q)=\max_{d \in D_N} \sum_{\omega}v_N(\omega,d,q)\ p_{N}(\omega)
\end{eqnarray}


where $q=q_{N-1}$ is the number of unsold RAM units available at the beginning of the $N$-th stage, $d=d_N$ is the number of RAM units ordered at the beginning of the $N$-th stage, $\omega=\omega_N$ is the demand at the stage $N$, and $D_{N}$ is the set of feasible supplies at the stage $N$.

Denote by $d_{N}(q)$ the optimal decision function maximizing the expected utility (18.55) at any fixed $q$.

Denote by $u_{N-1}(q)$ the maximal expected utility at the $(N-1)$-th stage

\begin{eqnarray}
u_{N-1}(q)=\max_{d \in D_{N-1}} (u_{N-1}(d,q)+\sum_{\omega}\ u_N(q)\ p_{N}(\omega)).
\end{eqnarray}


Note that the meaning of the parameter $q$ depends on the index of the function $u$: for example, $q=q_{N-1}$, if the index is $N$, and $q=q_{N-2}$, if this index is $N-1$, etc.

The maximum of (18.56) is obtained by the optimal decision function $d_{N-1}(q)$
where $q=q_{N-2}$.

Extending (18.56) to $i=0,...,N-1$ one obtains

\begin{eqnarray}
u_{N-i}(q)=\max_{d \in D_{N-i}} (u_{N-i}(d,q)+\sum_{\omega}\ u_{N-i+1}(q)\ p_{N-i+1}(\omega))
\end{eqnarray}


Solving recurrent equations (18.55)-(18.57) one defines the sequence of optimal decision functions
$d_{N-i}(q),\ i=0,...,N-1$
and the optimal expected utility functions $u_{N-i}(q)$.
This is done for all possible remaining RAM units $q$ and for all stages $i=1,...,N-1$.

In software terms, that means calculating recurrently the sequence $i=0,1,...,N-1$ of arrays that show how the optimal supply $d=d_{N-i}(q)$ and the maximal profit $u=u_{N-i}(q)$ depend on the unsold RAM units $q$ left from the previous stages. This way one reduces the computing time considerably, because array access operations need much less time than multi-dimensional optimization.
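The recursion (18.55)-(18.57) can be sketched as follows. The next-state rule $q'=\max(d+q-\omega,0)$ and all numeric parameters (the prices, the loss $L$, the demand rate, the ranges) are assumptions made for illustration:

```python
import math

# Assumed illustrative parameters.
C_R, C_W, L_SHORT = 5.0, 3.0, 4.0    # retail price, wholesale price, short-supply loss
LAM = 4.0                            # expected demand per stage
Q_MAX, D_MAX, W_MAX, N = 10, 10, 25, 5

def p_demand(w):
    # Poisson probability (18.53), truncated at W_MAX in the sums below
    return math.exp(-LAM) * LAM**w / math.factorial(w)

def profit(w, d, q):
    # stage profit (18.51) with b = d + q; +L(b-w) is a loss since b < w
    b = d + q
    v = C_R * w - C_W * d
    return v if b >= w else v - L_SHORT * (w - b)

u_next = [0.0] * (Q_MAX + 1)         # value after the final stage
policy = []
for stage in range(N):               # backward recursion over the stages
    u_cur, d_cur = [], []
    for q in range(Q_MAX + 1):
        best, best_d = -1e18, 0
        for d in range(D_MAX + 1):
            total = 0.0
            for w in range(W_MAX + 1):
                q2 = min(max(d + q - w, 0), Q_MAX)   # leftover units, capped
                total += (profit(w, d, q) + u_next[q2]) * p_demand(w)
            if total > best:
                best, best_d = total, d
        u_cur.append(best)
        d_cur.append(best_d)
    u_next, policy = u_cur, [d_cur] + policy

# leftover stock cannot hurt: the achievable profit grows with q
assert all(u_next[q] <= u_next[q + 1] for q in range(Q_MAX))
assert policy[0][0] >= 1   # with empty stock it pays to order
```

The arrays `policy[i]` and the value lists play exactly the role of the stored decision and utility arrays described above.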

2 Uncertain Wholesale Price

Denote by $p^c_{N-i}(c_w)=P\{c_w(N-i)=c_w\}$ a probability that the wholesale price $c_w(N-i)$ at the stage $N-i$ is $c_w$. Denote by $C_w(N-i)$ the expected wholesale price $c_w$ at the stage $N-i$. Assuming the Gaussian distribution of the wholesale price logarithm, one obtains the lognormal density function [Tijms, 1994]

\begin{eqnarray}
p^c_{N-i}(c_w)=\frac{1}{\alpha c_w \sqrt{2\pi}}
e^{-\frac{(\ln(c_w)-\gamma)^2}{2\alpha^2}}
\end{eqnarray}


where $c_w$ means $c_w(N-i)$, $\alpha >0$ is the shape parameter, and $\gamma$ is the scale parameter. The expected value $C_w(N-i)$ and the squared coefficient of variation $S_w(N-i)$ of the lognormal distribution are

\begin{eqnarray}
C_w(N-i)=e^{\gamma+\alpha^2/2} \\
S_w(N-i)=e^{\alpha^2}-1
\end{eqnarray}


The lognormal density is unimodal with the maximum at $c_w=e^{\gamma -\alpha^2}$.
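The quoted mean and mode can be checked numerically; the shape and scale values below are assumptions:

```python
import math

# Assumed illustrative lognormal parameters: shape alpha, scale gamma.
alpha, gamma = 0.5, math.log(3.0)

def density(c):
    # lognormal density (18.58)
    return math.exp(-(math.log(c) - gamma) ** 2 / (2 * alpha**2)) / (alpha * c * math.sqrt(2 * math.pi))

# direct numerical integration of c * density(c) over (0, 60]
h = 0.001
grid = [0.001 + i * h for i in range(60000)]
mean = sum(c * density(c) for c in grid) * h
assert abs(mean - math.exp(gamma + alpha**2 / 2)) < 1e-2   # mean e^{gamma+alpha^2/2}

# the density is unimodal with the maximum at e^{gamma - alpha^2}
mode = math.exp(gamma - alpha**2)
assert density(mode) > density(mode * 1.1) and density(mode) > density(mode * 0.9)
```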

In the case of random wholesale prices $c_w=c_w(N-i)$, the profit at the $(N-i)$-th stage

\begin{eqnarray}
v_{N-i}(\omega,d,q)&=&\cases { c_r \omega-c_w d , & if $b \ge
\omega$, \cr
c_r \omega-c_w d +L(b-\omega), & if $b < \omega$.\cr}
\end{eqnarray}


Here $c=c_r-c_w(N-i)$ and $q=q_{N-i-1}$ is the number of unsold units.
Then the expected utility at the stage $N-i,\ i=0,...,N-1$

\begin{eqnarray}
u_{N-i}(d,q)=\sum_{\omega}\int_0^{\infty} v_{N-i}(\omega,d,q)\ p^c_{N-i}(c_w) dc_w\ p_{N-i}(\omega)
\end{eqnarray}


The maximal expected utility at the last stage $N$

\begin{eqnarray}
u_N(q)=\max_{d \in D_N} \sum_{\omega}\int_0^{\infty}v_N(\omega,d,q)\ p^c_{N}(c_w) dc_w\ p_{N}(\omega)
\end{eqnarray}


where $q=q_{N-1}$ is the number of unsold RAM units left at the beginning of the $N$-th stage, and $c_w=c_w(N)$ is the wholesale price at the stage $N$.

Denote by $d_{N}(q)$ the optimal decision function maximizing the expected utility (18.63) at any fixed $q$.

Denote by $u_{N-1}(q)$ the maximal expected utility at the $(N-1)$-th stage

\begin{eqnarray}
u_{N-1}(q)=\max_{d \in D_{N-1}} (u_{N-1}(d,q)+\nonumber \\
\sum_{\omega}\int_0^{\infty}\ u_N(q)\ p^c_{N}(c_w) dc_w\ p_{N}(\omega)).
\end{eqnarray}


That is achieved by the optimal decision function $d_{N-1}(q)$
where $q=q_{N-2}$.

Extending to $i=0,...,N-1$ one obtains

\begin{eqnarray}
u_{N-i}(q)=\max_{d \in D_{N-i}} (u_{N-i}(d,q)+\nonumber \\
\sum_{\omega}\int_0^{\infty}\ u_{N-i+1}(q)\ p^c_{N-i+1}(c_w) dc_w\ p_{N-i+1}(\omega))
\end{eqnarray}


Solving recurrent equations (18.63)-(18.65) one defines the sequence of optimal decision functions
$d_{N-i}(q),\ i=0,...,N-1$
and the optimal expected utilities $u_{N-i}(q)$.
This is done for all possible numbers of RAM units $q$ available at the beginning of stages $i=1,...,N-1$.

3 Market Elasticity

We call the relation $\Omega=\Omega(c_r)$ of the expected demand $\Omega$ to the retail price $c_r \in C$ the Market Elasticity, where $C$ is a set of feasible retail prices. A simple approximation is the exponential function

\begin{eqnarray}
\Omega(c_r)=be^{-\beta c_r}
\end{eqnarray}


where $b=b_{N-i}$ is the saturation demand, $\beta=\beta_{N-i}$ is the elasticity parameter, $c_r= c_r(N-i) \in C_{N-i}$ is the retail price, and $C_{N-i}$ is a set of feasible retail prices, all at the stage $N-i$.
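A one-stage illustration of the demand curve (18.66): with margin $c_r-c_w$ the expected profit $(c_r-c_w)\,b\,e^{-\beta c_r}$ is maximized at $c_r=c_w+1/\beta$ (set the derivative to zero). The numeric values are assumptions:

```python
import math

# Assumed illustrative values: saturation demand, elasticity, wholesale price.
b, beta, c_w = 100.0, 0.5, 3.0

def expected_profit(c_r):
    # margin times the expected demand (18.66)
    return (c_r - c_w) * b * math.exp(-beta * c_r)

# brute-force search over a fine grid of retail prices
best = max((i * 0.01 for i in range(301, 2000)), key=expected_profit)
assert abs(best - (c_w + 1 / beta)) < 0.02   # analytic optimum c_w + 1/beta
```

This single-stage optimum ignores leftover stock; the full multi-stage problem is treated by the recursion below.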

In this case the profit function at the $(N-i)$-th stage

\begin{eqnarray}
v_{N-i}(\omega,d,q,c_r)&=&\cases {c_r \omega-c_w d , & if $b \ge \omega$, \cr
c_r \omega-c_w d +L(b-\omega), & if $b < \omega$.\cr}
\end{eqnarray}


Here $c$ denotes the difference $c=c_r(N-i)-c_w(N-i)$ of retail and wholesale prices at the stage $N-i$.
The expected utility at the stage $N-i,\ i=0,...,N-1$

\begin{eqnarray}
u_{N-i}(d,q,c_r)=\sum_{\omega}\int_0^{\infty} v_{N-i}(\omega,d,q)\ p^c_{N-i}(c_w) dc_w\ p_{N-i}(\omega)
\end{eqnarray}


where $c_r=c_r(N-i)$.
The maximal expected utility at the last stage $N$

\begin{eqnarray}
u_N(q)=\max_{d \in D_N , c_r \in C_N} \sum_{\omega}\int_0^{\infty}v_N(\omega,d,q,c_r)\ p^c_{N}(c_w) dc_w\ p_{N}(\omega)
\end{eqnarray}


Denote by $d_{N}(q)=(d_{N}(q) , c_{N}(q))$, where $q$ denotes $q_{N-1}$, the two-dimensional optimal decision function maximizing the expected utility (18.69) at any fixed $q$.

Denote by $u_{N-1}(q)$ the maximal expected utility at the $(N-1)$-th stage

\begin{eqnarray}
u_{N-1}(q)=\max_{d \in D_{N-1},c_r \in C_{N-1}} (u_{N-1}(d,q,c_r)+\nonumber \\
\int_0^{\infty}\sum_{\omega}\ u_N(q)\ p^c_{N}(c_w) dc_w\ p_{N}(\omega)).
\end{eqnarray}


That is achieved by the two-dimensional optimal decision function $d_{N-1}(q)=(d_{N-1}(q),c_{N-1}(q))$
where $q$ means $q_{N-2}$.

Extending (18.70) to $i=0,...,N-1$ one obtains

\begin{eqnarray}
u_{N-i}(q)=\max_{d \in D_{N-i},c_r \in C_{N-i}} (u_{N-i}(d,c_r,q)+\nonumber \\
\sum_{\omega}\int_0^{\infty}\ u_{N-i+1}(q)\ p^c_{N-i+1}(c_w) dc_w\ p_{N-i+1}(\omega))
\end{eqnarray}


Solving recurrent equations (18.69)-(18.71) one defines the sequence of decision functions
$d_{N-i}(q)=(d_{N-i}(q) ,c_{N-i}(q))\ i=0,...,N-1$
and the optimal expected profits $u_{N-i}(q)$.
This is done for all possible numbers $q$ of RAM units available at the beginning of the stages $i=1,...,N-1$.

In the case of market elasticity, to represent the maximal profits $u_{N-i}(q)$ and the two-dimensional optimal decisions $d_{N-i}(q)=(d_{N-i}(q) ,c_{N-i}(q)),\ i=0,...,N-1$, as functions of the remaining RAM units $q$, one needs to calculate a sequence of $N$ arrays defining the sequence of two-dimensional decision functions. This is difficult; therefore, one usually starts from the simplest case of uncertain demand (see Section 5.1).

Note that in the actual calculations of expressions (18.63)-(18.71) the integrals are replaced by sums and the densities by probabilities, using expressions similar to those in Section 3.2.

In the demonstration version the "horizon" $N$ means the estimated end of business. In the expert version one uses the "moving horizon" $N_o < N$, repeating the optimization after each stage. This way one simplifies calculations and updates data.

The recurrent equations (18.55)-(18.57) include Bride problems as special cases with only two feasible decisions $D=\{0,1\}$ and different utility functions (18.51).

6 Software Examples

Several interactive versions of the Bride problem are implemented as Java applets. These applets and the corresponding C++ software are available on the web-sites (see Section 4).

1 Single "Marriage"

For this example one selects the section "Discrete, Linear and Dynamic Programming" on the web-site and opens "Bride-1". Figure 18.1 shows the input window of the Bride problem [Kajokas, 1997].
Figure 18.2 shows simulated results of the "optimal marriage".
Figure 18.3 shows the optimal decision function defining how the critical "goodness" of a groom depends on his number.
Figure 18.1: The input window of the Bride problem.
\begin{figure}\centerline{ \epsfig{file=bride2.eps,width=12.0cm}
}\protect\end{figure}
Figure 18.2: The result of statistical simulation of the Bride problem.
\begin{figure}\centerline{
\epsfig{file=bride1.eps,width=12.0cm}
}\protect\end{figure}
Figure 18.3: The optimal decision function of the Bride problem.
\begin{figure}\centerline{
\epsfig{file=bride3.eps,height=17.0cm,width=12.0cm}
}\protect\end{figure}

1 "Buy-a-Car" Version

The "Buy-a-Car" version applies modified "Single-Marriage" software. Figure 18.4 shows the help window of the "Buy-a-Car" version of the Bride problem [Mikenas, 1998].
Figure 18.5 shows the input window of the "Buy-a-Car" problem.
Figure 18.6 shows the output window of the "Buy-a-Car" problem.
Figure 18.7 shows the optimal decision function of the "Buy-a-Car" problem.
Figure 18.4: The help window solving the "Buy-a-Car" problem.
\begin{figure}\centerline{
\epsfig{file=autohelp.eps,width=12.0cm}
}\protect\end{figure}
Figure 18.5: The input window solving the "Buy-a-Car" problem.
\begin{figure}\centerline{
\epsfig{file=autoinp.eps,width=12.0cm}
}\protect\end{figure}
Figure 18.6: The output window of the "Buy-a-Car" problem.
\begin{figure}\centerline{
\epsfig{file=autoout.eps,height=17.0cm,width=12.0cm}
}\protect\end{figure}
Figure 18.7: The optimal decision function of the "Buy-a-Car" problem.
\begin{figure}\centerline{
\epsfig{file=autograph.eps,height=17.0cm,width=12.0cm}}\protect\end{figure}

The "Buy-a-Car" algorithm is just a first approximation and should be improved, for example, by refining the "Buy-a-PC" model.

"No-Return" means that one cannot return to a situation that he or she rejected previously. The "No-Return" rule is essential in dynamic programming. "Return" cases are considered in the framework of discrete or non-linear programming, which usually requires more computing.

2 Many "Marriages"

First one selects the problem "Bride-Multi" in the section "Discrete, Dynamic and Linear Programming". The program [Damasevicius, 2000] starts by clicking the button "Start" at the end of the "Bride Problem" page. Data is entered using the operation page (see Figure 18.8). Optimization starts by clicking the "Calculate" button.

Figure 18.8: Input page.
\begin{figure}\centerline{
\epsfig{file=exinp.eps,width=12.0cm}
}\end{figure}
The decision function (see Figure 18.9) is defined by the Bellman equations (18.33)-(18.50).
Figure 18.9: Example of a decision function.
\begin{figure}\centerline{
\epsfig{file=exdec.eps,width=12.0cm}
}\end{figure}
The "State-of-Nature" generated by Monte Carlo simulation of the probability distributions (18.1)-(18.2) is also shown in Figure 18.9. The jumps of the decision function correspond to two marriages. In addition, Figure 18.10 shows the best "State-of-Nature".
Figure 18.10: Monte Carlo simulation of the "State of Nature".
\begin{figure}\centerline{
\epsfig{file=exgroom.eps,width=12.0cm}
}\end{figure}

jonas mockus 2004-03-20