Introduction to Stochastic Processes
Domain Definition
$\Omega$: finite state space
P: transition matrix of size $|\Omega| \times |\Omega|$; we will see it many times. The entry $P(x, y)$ is the probability of moving from state $x$ to state $y$.
A sequence of random variables $(X_0, X_1, X_2, \ldots)$ is a Markov chain with state space $\Omega$ and transition matrix P if for all $n \geq 0$ and all sequences $(x_0, x_1, \ldots, x_n, x_{n+1})$, we have that
$\mathbb{P}[X_{n+1} = x_{n+1} | X_0 = x_0,\ldots,X_n=x_n] = \mathbb{P}[X_{n+1} = x_{n+1} | X_n = x_n] = P(x_n, x_{n+1})$.
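The defining property says the next state depends only on the current one. A minimal sketch of simulating such a chain (the two-state matrix below is a hypothetical example, not from the notes):

```python
import random

# Hypothetical 2-state transition matrix, stored row by row:
# P[x][y] is the probability of moving from state x to state y.
P = {
    "a": {"a": 0.9, "b": 0.1},
    "b": {"a": 0.5, "b": 0.5},
}

def step(P, x):
    """Sample X_{n+1} given X_n = x from the row P(x, .)."""
    r = random.random()
    acc = 0.0
    for y, p in P[x].items():
        acc += p
        if r < acc:
            return y
    return y  # guard against floating-point round-off

def run(P, x0, n):
    """Return a trajectory (X_0, X_1, ..., X_n)."""
    traj = [x0]
    for _ in range(n):
        traj.append(step(P, traj[-1]))
    return traj

print(run(P, "a", 10))
```

Only the last state is ever consulted in `step`, which is exactly the Markov property.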
Gambler's ruin
The gambler's situation can be modeled by a Markov chain on the state space {0, 1, ..., N}:
- $X_0$: initial money in purse
- $X_n$: the gambler's fortune at time n
- $\mathbb{P}[X_{n+1} = X_n + 1 | X_n] = 1/2$
- $\mathbb{P}[X_{n+1} = X_n - 1 | X_n] = 1/2$
- The states 0 and N are absorbing.
- $\tau$: the first time that the gambler's fortune reaches 0 or N (the time the gambler stops).
Theorem: Assume that $X_0 = k$ for some $0 \leq k \leq N$. Then
$\mathbb{P}[X_\tau = N] = \frac{k}{N}, \mathbb{E}[\tau] = k(N - k).$
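A Monte Carlo sketch of the theorem (the values k = 3, N = 10 and the trial count are arbitrary choices for illustration):

```python
import random

def gamblers_ruin(k, N):
    """Fair game: +1 or -1 with probability 1/2 each, until 0 or N.
    Returns (X_tau, tau)."""
    x, t = k, 0
    while 0 < x < N:
        x += 1 if random.random() < 0.5 else -1
        t += 1
    return x, t

random.seed(0)
k, N, trials = 3, 10, 20000
wins = times = 0
for _ in range(trials):
    x, t = gamblers_ruin(k, N)
    wins += (x == N)
    times += t
print(wins / trials)   # close to k/N = 0.3
print(times / trials)  # close to k(N - k) = 21
```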
Coupon collecting
A company issues N different types of coupons. A collector desires a complete set.
Question: How many coupons must he obtain so that his collection contains all N types?
Assumption: each coupon is equally likely to be each of the N types.
The collector's situation can be modeled by a Markov chain on the state space {0, 1, ..., N}:
- $X_0 = 0$
- $X_n$: the number of different types among the collector's first n coupons.
- $\mathbb{P}[X_{n+1} = k + 1 | X_n = k] = (N - k) / N$
- $\mathbb{P}[X_{n+1} = k | X_n = k] = k/N$
- $\tau$: the first time that the collector obtains all N types.
Answer to the question.
Theorem $\mathbb{E}[\tau] = N \sum_{k=1}^N \frac{1}{k} \approx N\log{N}$
A more precise answer.
Theorem: For any c > 0, we have that $\mathbb{P}[\tau > N\log{N} + cN] \leq e^{-c}$.
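A quick simulation check of the expectation formula (N = 20 and the trial count are arbitrary illustrative choices):

```python
import math
import random

def collect(N):
    """Draw uniform coupons until all N types are seen; return tau."""
    seen, draws = set(), 0
    while len(seen) < N:
        seen.add(random.randrange(N))
        draws += 1
    return draws

random.seed(0)
N, trials = 20, 5000
avg = sum(collect(N) for _ in range(trials)) / trials
exact = N * sum(1 / k for k in range(1, N + 1))  # N * H_N
print(avg, exact)  # empirical mean is close to N * H_N
```

Note that $N H_N$ is only asymptotically $N \log N$; for N = 20 the harmonic-sum formula is noticeably larger than $N\log N$.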
Notations
$\Omega$: state space
$\mu$: measure on $\Omega$
P,Q: transition matrices $|\Omega| \times |\Omega|$
f: function on $\Omega$
- $\mu P$: measure on $\Omega$
- PQ: transition matrix
- Pf: function on $\Omega$
Associativity: $(\mu P)Q = \mu(PQ)$ and $(PQ)f = P(Qf)$.
Consider a Markov chain with state space $\Omega$ and transition matrix P.
Recall that $\mathbb{P}[X_{n+1} = y | X_n = x] = P(x,y)$
- $\mu_0$: the distribution of $X_0$
- $\mu_n$: the distribution of $X_n$
Then we have that
- $\mu_{n+1} = \mu_nP$.
- $\mu_n = \mu_0P^n$.
- $\mathbb{E}[f(X_n)] = \mu_0P^nf$.
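These identities can be checked numerically by treating $\mu$ as a row vector and $f$ as a column vector (the 2-state matrix below is a hypothetical example):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # hypothetical transition matrix
mu0 = np.array([1.0, 0.0])   # X_0 starts in state 0
f = np.array([0.0, 1.0])     # f = indicator of state 1

# mu_n = mu_0 P^n: the distribution of X_n.
mu_n = mu0 @ np.linalg.matrix_power(P, 10)
print(mu_n)        # a probability vector over the two states
print(mu_n @ f)    # E[f(X_10)] = P[X_10 = 1]
```

Multiplying a measure on the left advances the distribution one step; multiplying a function on the right computes conditional expectations.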
Stationary distribution
Consider a Markov chain with state space $\Omega$ and transition matrix P.
A probability measure $\pi$ is called stationary if $\pi = \pi P$.
If $\pi$ is stationary and the initial measure $\mu_0$ equals $\pi$, then $\mu_n = \pi$ for all n.
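One way to find a stationary distribution is power iteration, i.e. repeatedly applying $\mu_{n+1} = \mu_n P$ until the measure stops changing. A sketch with a hypothetical 3-state matrix:

```python
import numpy as np

# Hypothetical row-stochastic transition matrix.
P = np.array([[0.5, 0.25, 0.25],
              [0.2, 0.6,  0.2 ],
              [0.3, 0.3,  0.4 ]])

mu = np.full(3, 1 / 3)   # start from the uniform measure
for _ in range(200):     # iterate mu <- mu P until convergence
    mu = mu @ P

print(mu)                       # an approximately stationary measure
print(np.allclose(mu, mu @ P))  # True: pi = pi P
```

For chains like this one, the iterates converge to a stationary $\pi$ regardless of the starting measure.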
Note: a Markov chain can be represented as a graph, as we will see next.
Random walks on graphs
Def_1: A graph G = (V,E) consists of a vertex set V and an edge set E:
- V: set of vertices
- E: set of pairs of vertices
- when (x,y) $\in$ E, we write x ~ y : x and y are joined by an edge. We say y is a neighbor of x.
- For x $\in$ V, deg(x) : the number of neighbors of x.
Def_2: Given a graph G = (V, E), we define simple random walk on G to be the Markov chain with state space V and transition matrix :
P(x, y) =$\begin{cases} \dfrac{1}{\deg(x)} & \text{if } y \sim x \\ 0 & \text{otherwise} \end{cases}$
Theorem
Define $\pi(x) = \frac{\deg(x)}{2|E|}$ for all $x \in V$.
Then $\pi$ is a stationary distribution for the simple random walk on the graph.
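The theorem can be verified directly on a small example (the 5-edge graph below is a hypothetical choice):

```python
import numpy as np

# Hypothetical undirected graph: a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
A = np.zeros((n, n))
for x, y in edges:
    A[x, y] = A[y, x] = 1          # adjacency matrix

deg = A.sum(axis=1)
P = A / deg[:, None]               # simple random walk: P(x,y) = 1/deg(x)
pi = deg / (2 * len(edges))        # pi(x) = deg(x) / (2|E|)

print(np.allclose(pi, pi @ P))     # True: pi is stationary
print(pi.sum())                    # sums to 1
```

The check works because $\sum_x \pi(x) P(x, y) = \sum_{x \sim y} \frac{1}{2|E|} = \frac{\deg(y)}{2|E|} = \pi(y)$.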