
Commit c7f20d2

Update markov_chains_II.md
This pull request updates the lecture markov_chains_II.md based on #460. In particular, the changes include:

- "We discussed uniqueness of stationary distributions our [earlier lecture on Markov chains](https://intro.quantecon.org/markov_chains_I.html)" -> "We discussed uniqueness of stationary distributions in our [earlier lecture on Markov chains](https://intro.quantecon.org/markov_chains_I.html)".
- Unify `\mathbb` for the indicator function.
- "an initial." -> "an initial".
- Fix capitalization of the subtitle.
- "Discussed before" -> "discussed in [add doc link]".
- `1 + np.arange(ts_length)` -> `np.arange(1, ts_length + 1)`.
- Change the time length for the periodic chain.
- "Political institution mode" -> "political institution model".
- Remove the `alpha` parameter for the 33.3.4 graph and remove the `figsize`.
- Add a seed for `mc.simulate` (@longye-tian).
- Delete https://intro.quantecon.org/markov_chains_II.html#expectations-of-geometric-sums.
- Change the graph to the simple one in the exercise.
- Change the number in the range to `len(P)`.
1 parent c6fa84d commit c7f20d2
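
One note on the `np.arange` item: the two expressions are numerically identical, so this is a readability change rather than a bug fix. A quick check (a minimal sketch, assuming only that numpy is installed):

```python
import numpy as np

ts_length = 5
# Both yield [1 2 3 4 5]; the rewrite just reads more directly
print(1 + np.arange(ts_length))       # old form
print(np.arange(1, ts_length + 1))    # new form
```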

1 file changed (+31 −63 lines)

lectures/markov_chains_II.md

@@ -142,7 +142,7 @@ We'll come back to this a bit later.
 
 ### Irreducibility and stationarity
 
-We discussed uniqueness of stationary distributions our {doc}`earlier lecture on Markov chains <markov_chains_I>`
+We discussed uniqueness of stationary distributions in our earlier lecture {doc}`markov_chains_I`.
 
 There we {prf:ref}`stated <mc_po_conv_thm>` that uniqueness holds when the transition matrix is everywhere positive.
 
@@ -174,16 +174,15 @@ distribution, then, for all $x \in S$,
 ```{math}
 :label: llnfmc0
 
-\frac{1}{m} \sum_{t = 1}^m \mathbf{1}\{X_t = x\} \to \psi^*(x)
+\frac{1}{m} \sum_{t = 1}^m \mathbb{1}\{X_t = x\} \to \psi^*(x)
 \quad \text{as } m \to \infty
 ```
 
 ````
 
 Here
 
-* $\{X_t\}$ is a Markov chain with stochastic matrix $P$ and initial.
-distribution $\psi_0$
+* $\{X_t\}$ is a Markov chain with stochastic matrix $P$ and initial distribution $\psi_0$
 * $\mathbb{1} \{X_t = x\} = 1$ if $X_t = x$ and zero otherwise.
 
 The result in [theorem 4.3](llnfmc0) is sometimes called **ergodicity**.
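
As an aside, the convergence in [theorem 4.3](llnfmc0) is easy to check numerically. A minimal sketch, assuming `numpy` and `quantecon` are installed; the two-state chain here is illustrative, not one of the lecture's examples:

```python
import numpy as np
import quantecon as qe

# Illustrative two-state stochastic matrix (an assumption, not from the lecture)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

# Fraction of time the simulated path spends in state 0
X = mc.simulate(ts_length=100_000, init=1, random_state=1)
p_hat = np.mean(X == 0)

print(p_hat, ψ_star[0])  # the time average should be close to ψ*(0)
```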
@@ -196,16 +195,16 @@ This gives us another way to interpret the stationary distribution (provided irr
 
 Importantly, the result is valid for any choice of $\psi_0$.
 
-The theorem is related to {doc}`the Law of Large Numbers <lln_clt>`.
+The theorem is related to {doc}`the law of large numbers <lln_clt>`.
 
 It tells us that, in some settings, the law of large numbers sometimes holds even when the
 sequence of random variables is [not IID](iid_violation).
 
 
 (mc_eg1-2)=
-### Example: Ergodicity and unemployment
+### Example: ergodicity and unemployment
 
-Recall our cross-sectional interpretation of the employment/unemployment model {ref}`discussed above <mc_eg1-1>`.
+Recall our cross-sectional interpretation of the employment/unemployment model {ref}`discussed before <mc_eg1-1>`.
 
 Assume that $\alpha \in (0,1)$ and $\beta \in (0,1)$, so that irreducibility holds.
 
@@ -235,7 +234,7 @@ Let's denote the fraction of time spent in state $x$ over the period $t=1,
 \ldots, n$ by $\hat p_n(x)$, so that
 
 $$
-\hat p_n(x) := \frac{1}{n} \sum_{t = 1}^n \mathbf{1}\{X_t = x\}
+\hat p_n(x) := \frac{1}{n} \sum_{t = 1}^n \mathbb{1}\{X_t = x\}
 \qquad (x \in \{0, 1, 2\})
 $$
 
@@ -263,9 +262,9 @@ fig, ax = plt.subplots()
 ax.axhline(ψ_star[x], linestyle='dashed', color='black',
            label = fr'$\psi^*({x})$')
 # Compute the fraction of time spent in state 0, starting from different x_0s
-for x0 in range(3):
+for x0 in range(len(P)):
     X = mc.simulate(ts_length, init=x0)
-    p_hat = (X == x).cumsum() / (1 + np.arange(ts_length))
+    p_hat = (X == x).cumsum() / np.arange(1, ts_length+1)
     ax.plot(p_hat, label=fr'$\hat p_n({x})$ when $X_0 = \, {x0}$')
 ax.set_xlabel('t')
 ax.set_ylabel(fr'$\hat p_n({x})$')
@@ -309,14 +308,13 @@ The following figure illustrates
 ```{code-cell} ipython3
 P = np.array([[0, 1],
               [1, 0]])
-ts_length = 10_000
+ts_length = 100
 mc = qe.MarkovChain(P)
 n = len(P)
 fig, axes = plt.subplots(nrows=1, ncols=n)
 ψ_star = mc.stationary_distributions[0]
 
 for i in range(n):
-    axes[i].set_ylim(0.45, 0.55)
     axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color='black',
                     label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
@@ -326,10 +324,11 @@ for i in range(n):
     for x0 in range(n):
         # Generate time series starting at different x_0
         X = mc.simulate(ts_length, init=x0)
-        p_hat = (X == i).cumsum() / (1 + np.arange(ts_length))
+        p_hat = (X == i).cumsum() / np.arange(1, ts_length+1)
         axes[i].plot(p_hat, label=f'$x_0 = \, {x0} $')
 
     axes[i].legend()
+plt.tight_layout()
 plt.show()
 ```
 
@@ -341,7 +340,7 @@ However, the distribution at each state does not.
 
 ### Example: political institutions
 
-Let's go back to the political institutions mode with six states discussed {ref}`in a previous lecture <mc_eg3>` and study ergodicity.
+Let's go back to the political institutions model with six states discussed {ref}`in a previous lecture <mc_eg3>` and study ergodicity.
 
 
 Here's the transition matrix.
@@ -374,19 +373,18 @@ P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
      [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
      [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
 
-ts_length = 10_000
+ts_length = 2500
 mc = qe.MarkovChain(P)
 ψ_star = mc.stationary_distributions[0]
-fig, ax = plt.subplots(figsize=(9, 6))
-X = mc.simulate(ts_length)
+fig, ax = plt.subplots()
+X = mc.simulate(ts_length, random_state=1)
 # Center the plot at 0
-ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)
+ax.axhline(linestyle='dashed', lw=2, color='black')
 
 
 for x0 in range(len(P)):
     # Calculate the fraction of time for each state
-    p_hat = (X == x0).cumsum() / (1 + np.arange(ts_length))
+    p_hat = (X == x0).cumsum() / np.arange(1, ts_length+1)
     ax.plot(p_hat - ψ_star[x0], label=f'$x = {x0+1} $')
 ax.set_xlabel('t')
 ax.set_ylabel(r'$\hat p_n(x) - \psi^* (x)$')
@@ -395,31 +393,6 @@ ax.legend()
 plt.show()
 ```
 
-
-
-### Expectations of geometric sums
-
-Sometimes we want to compute the mathematical expectation of a geometric sum, such as
-$\sum_t \beta^t h(X_t)$.
-
-In view of the preceding discussion, this is
-
-$$
-\mathbb{E}
-    \left[
-        \sum_{j=0}^\infty \beta^j h(X_{t+j}) \mid X_t
-        = x
-    \right]
-= x + \beta (Ph)(x) + \beta^2 (P^2 h)(x) + \cdots
-$$
-
-By the {ref}`Neumann series lemma <la_neumann>`, this sum can be calculated using
-
-$$
-I + \beta P + \beta^2 P^2 + \cdots = (I - \beta P)^{-1}
-$$
-
 ## Exercises
 
 ````{exercise}
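
For reference, the geometric-sum identity in the section removed above can still be verified numerically. A minimal sketch, assuming numpy; the chain `P`, payoff vector `h`, and discount factor `β` are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Illustrative inputs (assumptions, not from the lecture)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
h = np.array([1.0, 2.0])
β = 0.95

# Neumann series lemma: v = (I - βP)^{-1} h solves v = h + βPv
v = np.linalg.solve(np.eye(len(P)) - β * P, h)

# Compare with a long truncated sum  Σ_j β^j (P^j h)
v_approx = sum(β**j * np.linalg.matrix_power(P, j) @ h for j in range(500))
print(v, v_approx)  # should agree closely
```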
@@ -508,14 +481,13 @@ Part 2:
 ```{code-cell} ipython3
 ts_length = 1000
 mc = qe.MarkovChain(P)
-fig, ax = plt.subplots(figsize=(9, 6))
-X = mc.simulate(ts_length)
-ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)
+fig, ax = plt.subplots()
+X = mc.simulate(ts_length, random_state=1)
+ax.axhline(linestyle='dashed', lw=2, color='black')
 
-for x0 in range(8):
+for x0 in range(len(P)):
     # Calculate the fraction of time for each worker
-    p_hat = (X == x0).cumsum() / (1 + np.arange(ts_length))
+    p_hat = (X == x0).cumsum() / np.arange(1, ts_length+1)
     ax.plot(p_hat - ψ_star[x0], label=f'$x = {x0+1} $')
 ax.set_xlabel('t')
 ax.set_ylabel(r'$\hat p_n(x) - \psi^* (x)$')
@@ -555,7 +527,7 @@ In other words, if $\{X_t\}$ represents the Markov chain for
 employment, then $\bar X_m \to p$ as $m \to \infty$, where
 
 $$
-\bar X_m := \frac{1}{m} \sum_{t = 1}^m \mathbf{1}\{X_t = 0\}
+\bar X_m := \frac{1}{m} \sum_{t = 1}^m \mathbb{1}\{X_t = 0\}
 $$
 
 This exercise asks you to illustrate convergence by computing
@@ -582,31 +554,27 @@ As $m$ gets large, both series converge to zero.
 
 ```{code-cell} ipython3
 α = β = 0.1
-ts_length = 10000
+ts_length = 3000
 p = β / (α + β)
 
 P = ((1 - α,     α), # Careful: P and p are distinct
      (    β, 1 - β))
 mc = qe.MarkovChain(P)
 
-fig, ax = plt.subplots(figsize=(9, 6))
-ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)
+fig, ax = plt.subplots()
+ax.axhline(linestyle='dashed', lw=2, color='black')
 
-for x0, col in ((0, 'blue'), (1, 'green')):
+for x0 in range(len(P)):
     # Generate time series for worker that starts at x0
     X = mc.simulate(ts_length, init=x0)
     # Compute fraction of time spent unemployed, for each n
-    X_bar = (X == 0).cumsum() / (1 + np.arange(ts_length))
+    X_bar = (X == 0).cumsum() / np.arange(1, ts_length+1)
     # Plot
-    ax.fill_between(range(ts_length), np.zeros(ts_length), X_bar - p, color=col, alpha=0.1)
-    ax.plot(X_bar - p, color=col, label=f'$x_0 = \, {x0} $')
-    # Overlay in black--make lines clearer
-    ax.plot(X_bar - p, 'k-', alpha=0.6)
+    ax.plot(X_bar - p, label=f'$x_0 = \, {x0} $')
 ax.set_xlabel('t')
 ax.set_ylabel(r'$\bar X_m - \psi^* (x)$')
 
-ax.legend(loc='upper right')
+ax.legend()
 plt.show()
 ```
 
