Notes on 130C: Stochastic Processes

4 minute read

These are some of the notes I used in the UC Irvine Math 130C course.

Guessing the stationary distribution

This is a generalization of Problem 4.73 in Ross’s Introduction to Probability Models (12th ed.).

Let $X$ be a time-reversible Markov chain with state space $S$ and transition matrix $P = (p_{ij})$, and let $C$ be a non-empty subset of $S$. Define the Markov chain $Y$ on $S$ by the transition matrix $Q = (q_{ij})$ where \(q_{i j}=\left\{\begin{array}{ll}\beta p_{i j} & \text { if } i \in C \text { and } j \notin C \\[5pt] p_{i j} & \text { otherwise }\end{array}\right.\) for $i \neq j$, and where $0 <\beta< 1$ is a constant. The diagonal entries $q_{ii}$ are chosen so that the rows of $Q$ sum to one and $Y$ is aperiodic. Find the stationary distribution of $Y$, show that $Y$ is time-reversible, and describe the situation in the limit as $\beta\to 0^+$.
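To make the construction concrete, here is a minimal numpy sketch; the helper name `make_Q` and the diagonal repair are my own rendering of the problem statement, not part of Ross’s text.

```python
import numpy as np

def make_Q(P, C, beta):
    """Damp the transitions that exit C by a factor beta, then repair the
    diagonal so that every row of Q sums to one again."""
    n = P.shape[0]
    Q = P.copy()
    for i in C:
        for j in range(n):
            if j != i and j not in C:
                Q[i, j] = beta * P[i, j]  # exits from C are slowed down
    # Put the removed mass back on the diagonal; any q_ii > 0 this creates
    # also makes Y aperiodic.
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, 1.0 - Q.sum(axis=1))
    return Q
```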

Solution: We guess that the stationary distribution $(q_i)$ of $Y$ is obtained from the stationary distribution $(p_i)$ of $X$ via

\[q_{i}=\left\{\begin{array}{ll}a p_{i} & \text { if } i \in C \\ b p_{i} & \text { otherwise }\end{array}\right.\]

for some constants $a, b > 0$. We have $p_i p_{ij} = p_j p_{ji}$ (since $X$ is time-reversible), and we need the detailed balance condition $q_i q_{ij} = q_j q_{ji}$ for $i \neq j$. If $i, j \in C$, then $q_{ij} = p_{ij}$, $q_{ji} = p_{ji}$, $q_i = ap_i$ and $q_j = ap_j$, therefore $q_i q_{ij} = ap_i p_{ij} = ap_j p_{ji} = q_j q_{ji}$. The case $i, j \notin C$ is similar. If $i \in C$ but $j \notin C$, then $q_{ij} = \beta p_{ij}$, $q_{ji} = p_{ji}$, $q_i = ap_i$ and $q_j = bp_j$; in order to get $ap_i \cdot \beta p_{ij} = bp_j \cdot p_{ji}$, we need

\[a \cdot \beta = b.\]

The last case is $i \notin C$ but $j \in C$: here $q_{ij} = p_{ij}$, $q_{ji} =\beta p_{ji}$, $q_i = bp_i$ and $q_j = ap_j$, and the equality $bp_i \cdot p_{ij} = ap_j\cdot \beta p_{ji}$ again follows from $a\beta = b$. Since detailed balance with respect to $Q$ implies that $(q_i)$ is stationary for $Y$ and that $Y$ is time-reversible, it remains to choose $a, b$ such that $a\beta = b$ and $\sum_i q_i = 1$;

\[1=\sum_{i} q_{i}=\sum_{i \in C} q_{i}+\sum_{i \notin C} q_{i}=a \sum_{i \in C} p_{i}+b \sum_{i \notin C} p_{i}=a\left(\sum_{i \in C} p_{i}+\beta \sum_{i \notin C} p_{i}\right),\]

and

\[a=\frac{1}{\sum_{i \in C} p_{i}+\beta \sum_{i \notin C} p_{i}}, \quad b=\frac{\beta}{\sum_{i \in C} p_{i}+\beta \sum_{i \notin C} p_{i}}.\]

If $\beta$ is small, then $b$ is small, so the stationary distribution concentrates almost entirely on $C$: exiting $C$ becomes hard, while returning to $C$ remains easy. In the limit as $\beta \to 0^+$ we get $q_i \to p_i / \sum_{j \in C} p_j$ for $i \in C$ and $q_i \to 0$ otherwise, i.e. the stationary distribution of $X$ conditioned on $C$.
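As a sanity check, the following sketch builds a small reversible test chain (a random walk on a weighted undirected graph, which satisfies detailed balance automatically with $p_i \propto w_i$), applies the construction above, and compares the numerically computed stationary distribution of $Y$ with the formula. The test chain, seed, and all names are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reversible test chain: random walk on a weighted undirected graph.
# With symmetric weights w_ij, p_ij = w_ij / w_i satisfies detailed
# balance, with stationary distribution p_i = w_i / sum_k w_k.
n = 6
W = rng.random((n, n))
W = W + W.T
P = W / W.sum(axis=1, keepdims=True)
p = W.sum(axis=1) / W.sum()          # stationary distribution of X

C, beta = {0, 1, 2}, 0.1

# The modified chain Y: damp exits from C, repair the diagonal.
Q = P.copy()
for i in C:
    for j in range(n):
        if j != i and j not in C:
            Q[i, j] *= beta
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, 1.0 - Q.sum(axis=1))

# Stationary distribution of Y, numerically: the left eigenvector of Q
# for the eigenvalue 1.
vals, vecs = np.linalg.eig(Q.T)
q_num = np.real(vecs[:, np.argmax(np.real(vals))])
q_num /= q_num.sum()

# Stationary distribution of Y from the formula derived above.
Z = p[list(C)].sum() + beta * p[[i for i in range(n) if i not in C]].sum()
q_formula = np.array([(p[i] if i in C else beta * p[i]) / Z for i in range(n)])

print(np.allclose(q_num, q_formula))  # True
```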

Conditional distribution of Brownian motion

This is modified from a question on Math.Stackexchange and provides some heuristics for probabilities related to the Brownian bridge.

Question: Given a standard Brownian motion $(W_t)_{t \geq 0}$ and a constant $T>0$, what is $\mathbb{P} (W_1>0 \mid W_{1+T}>0)$?

Heuristics: The probability of interest is $\mathbb{P}(W_{1}>0,W_{1+T}>0)/\mathbb{P}(W_{1+T}>0)$, and the numerator depends on $T$. If $T$ is large, then the gap between the two “observations” at times $t=1$ and $t=1+T$ is large, and so we don’t expect the value at time $t=1$ to tell us much about the value at time $t=1+T$. In contrast, if $T$ is small, then strict positivity of $W_{1}$ implies that $W_{1+T}$ is also strictly positive with “high” probability (because the time difference $(1+T)-1 = T$ between the observations is small).
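Before doing the exact computation, a quick Monte Carlo sketch supports this intuition; the sample size and seed here are my own arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

def cond_prob(T):
    """Monte Carlo estimate of P(W_1 > 0 | W_{1+T} > 0), using that the
    increment W_{1+T} - W_1 is N(0, T) and independent of W_1."""
    w1 = rng.standard_normal(N)
    w1pT = w1 + np.sqrt(T) * rng.standard_normal(N)
    return np.mean((w1 > 0) & (w1pT > 0)) / np.mean(w1pT > 0)

for T in (0.01, 1.0, 100.0):
    print(T, cond_prob(T))
# close to 1 for small T (strong coupling), close to 1/2 for large T
# (near-independence)
```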

Solution: Fix $T>0$. The restarted process

\[B_t := W_{1+t}-W_{1}, \qquad t \geq 0,\]

is a Brownian motion which is independent of $(W_t)_{t \leq 1}$. Moreover, since $W_{1+T} = W_1 + B_T$,

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) = \mathbb{P}(W_{1}>0, B_T>-W_{1}).\]

Using the independence of $(W_t)_{t \leq 1}$ and $(B_t)_{t \geq 0}$, it follows from the tower property of conditional expectation that

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) = \mathbb{E} \big[ \mathbb{E}(1_{\{W_{1}>0\}} 1_{\{B_T>-W_{1}\}} \mid W_{1}) \big]= \mathbb{E}(1_{\{W_{1}>0\}} f(W_{1}))\]

where

\[f(x) := \mathbb{P}(B_T>-x), \qquad x \in \mathbb{R}.\]

If we denote by $\Phi$ the cdf of the standard Gaussian distribution, then it follows from $B_T \sim N(0,T)$ that

\[f(x) = \mathbb{P}(\sqrt{T}B_1>-x) = 1- \mathbb{P}\left(B_1 \leq - \frac{x}{\sqrt{T}}\right) =1- \Phi \left( - \frac{x}{\sqrt{T}} \right).\]

Hence,

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) = \underbrace{\mathbb{P}(W_{1}>0)}_{=1/2} - \mathbb{E} \left( 1_{\{W_{1}>0\}} \Phi \left(- \frac{W_{1}}{\sqrt{T}} \right) \right).\]

Writing $\phi$ for the pdf of the standard Gaussian distribution and using that $W_1 \sim N(0,1)$, we get

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) = \frac{1}{2} - \int_0^{\infty} \Phi \left(- \frac{x}{\sqrt{T}} \right) \phi(x) \, dx.\]

The latter integral can be calculated explicitly using the identity $\int_0^{\infty} \Phi(bx) \phi(x) \, dx = \frac{1}{4} + \frac{1}{2\pi} \arctan(b)$, here with $b = -1/\sqrt{T}$:

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) = \frac{1}{2} - \left[ \frac{1}{4} + \frac{1}{2\pi} \arctan \left(-\frac{1}{\sqrt{T}} \right) \right].\]

Since $\arctan(-x)=-\arctan(x)$ and $\arctan(1/x) = \pi/2 - \arctan(x)$ for $x>0$, we obtain

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) = \frac{1}{2} - \frac{1}{2\pi} \arctan (\sqrt{T}).\tag{1}\]

Dividing $(1)$ by $\mathbb{P}(W_{1+T}>0) = 1/2$ answers the original question:

\[\mathbb{P}(W_{1}>0 \mid W_{1+T}>0) = 1 - \frac{1}{\pi} \arctan (\sqrt{T}).\]
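The arctan identity used above can itself be double-checked numerically; here is a short scipy sketch (my own verification, not part of the original argument).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Compare the integral from the derivation with the closed form
# 1/4 + arctan(b) / (2 pi), with b = -1/sqrt(T), for a few values of T.
for T in (0.25, 1.0, 4.0):
    numeric, _ = quad(lambda x: norm.cdf(-x / np.sqrt(T)) * norm.pdf(x),
                      0, np.inf)
    closed = 0.25 + np.arctan(-1.0 / np.sqrt(T)) / (2.0 * np.pi)
    print(T, numeric, closed)  # the two columns agree to quad precision
```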

Remark: It’s easy to miss some constants/signs when doing such calculations, so let us briefly check whether our final result is reasonable. Since $\arctan(x) \geq 0$ for $x \geq 0$, we find from $(1)$ that $\mathbb{P}(W_{1}>0,W_{1+T}>0) \leq 1/2$. This is what we would expect anyway, since

\[\mathbb{P}(W_{1}>0,W_{1+T}>0) \leq \mathbb{P}(W_{1}>0) = \frac{1}{2}.\]

Moreover, $x \mapsto \arctan(x)$ is increasing, and so $(1)$ shows that $\mathbb{P}(W_{1}>0,W_{1+T}>0)$ is decreasing in $T$: it falls from $1/2$ as $T \to 0^+$ to $1/4$ as $T \to \infty$, consistent with the heuristics above.
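As a final sanity check, a Monte Carlo estimate of the joint probability can be compared with $(1)$ across this range of $T$; again, the sample size and seed are my own arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000

def p_joint_mc(T):
    """Monte Carlo estimate of P(W_1 > 0, W_{1+T} > 0)."""
    w1 = rng.standard_normal(N)
    w1pT = w1 + np.sqrt(T) * rng.standard_normal(N)
    return np.mean((w1 > 0) & (w1pT > 0))

def p_joint_formula(T):
    return 0.5 - np.arctan(np.sqrt(T)) / (2.0 * np.pi)

for T in (0.01, 1.0, 100.0):
    print(T, p_joint_mc(T), p_joint_formula(T))
# the joint probability decreases from about 1/2 (T -> 0+) towards
# 1/4 (T -> infinity, where the two observations decouple)
```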
