Commit e1f0627

Merge branch 'improve-mc-eigen' of https://github.com/QuantEcon/lecture-python-intro into improve-mc-eigen
2 parents f23e1a4 + c4a6fdf

2 files changed (+43, -52 lines)

lectures/_toc.yml

Lines changed: 1 addition & 1 deletion
@@ -35,6 +35,7 @@ parts:
   - file: olg
   - file: markov_chains_I
   - file: markov_chains_II
+  - file: eigen_II
   - file: commod_price
 - caption: Optimization
   numbered: true
@@ -46,7 +47,6 @@ parts:
 - caption: Modeling in Higher Dimensions
   numbered: true
   chapters:
-  - file: eigen_II
   - file: input_output
   - file: lake_model
   - file: asset_pricing

lectures/markov_chains_II.md

Lines changed: 42 additions & 51 deletions
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.14.4
+    jupytext_version: 1.14.5
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -37,24 +37,19 @@ to be installed on your computer. Installation instructions for graphviz can be
 [here](https://www.graphviz.org/download/)
 ```

-
 ## Overview

-This lecture continues on from our {doc}`earlier lecture on Markov chains
-<markov_chains_I>`.
-
+This lecture continues our journey in Markov chains.

-Specifically, we will introduce the concepts of irreducibility and ergodicity, and see how they connect to stationarity.
+Specifically, we will introduce irreducibility and ergodicity, and how they connect to stationarity.

-Irreducibility describes the ability of a Markov chain to move between any two states in the system.
+Irreducibility is a concept that describes the ability of a Markov chain to move between any two states in the system.

 Ergodicity is a sample path property that describes the behavior of the system over long periods of time.

-As we will see,
+The concepts of irreducibility and ergodicity are closely related to the idea of stationarity.

-* an irreducible Markov chain guarantees the existence of a unique stationary distribution, while
-* an ergodic Markov chain generates time series that satisfy a version of the
-  law of large numbers.
+An irreducible Markov chain guarantees the existence of a unique stationary distribution, while an ergodic Markov chain ensures that the system eventually reaches its stationary distribution, regardless of its initial state.

 Together, these concepts provide a foundation for understanding the long-term behavior of Markov chains.

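As an illustrative aside, the uniqueness claim in the revised overview can be checked with the same `quantecon` tools the lecture already uses (`MarkovChain.is_irreducible` and `stationary_distributions`); the matrix `P` below is a hypothetical example chosen only for this sketch.

```python
import numpy as np
import quantecon as qe

# Hypothetical stochastic matrix: every state can eventually be
# reached from every other state, so the chain is irreducible.
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.1, 0.8]])

mc = qe.MarkovChain(P)

print(mc.is_irreducible)              # True

# An irreducible finite chain has exactly one stationary distribution,
# returned here as a single row.
print(mc.stationary_distributions)    # array of shape (1, 3)
```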
@@ -179,8 +174,6 @@ mc = qe.MarkovChain(P, ('poor', 'middle', 'rich'))
 mc.is_irreducible
 ```

-+++ {"user_expressions": []}
-
 It might be clear to you already that irreducibility is going to be important
 in terms of long-run outcomes.

@@ -270,19 +263,19 @@ In view of our latest (ergodicity) result, it is also the fraction of time that

 Thus, in the long run, cross-sectional averages for a population and time-series averages for a given person coincide.

-This is one aspect of the concept of ergodicity.
+This is one aspect of the concept of ergodicity.


 (ergo)=
 ### Example 2

-Another example is the Hamilton dynamics we {ref}`discussed before <mc_eg2>`.
+Another example is the Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed before <mc_eg2>`.

-The {ref}`graph <mc_eg2>` of the Markov chain shows it is irreducible
+The diagram of the Markov chain shows that it is **irreducible**.

-Therefore, we can see the sample path averages for each state (the fraction of
-time spent in each state) converges to the stationary distribution regardless of
-the starting state
+Therefore, we can see that the sample path averages for each state (the fraction of time spent in each state) converge to the stationary distribution regardless of the starting state.
+
+Let's denote the fraction of time spent in state $x$ at time $t$ in our sample path as $\hat p_t(x)$ and compare it with the stationary distribution $\psi^* (x)$.

 ```{code-cell} ipython3
 P = np.array([[0.971, 0.029, 0.000],
@@ -291,27 +284,28 @@ P = np.array([[0.971, 0.029, 0.000],
 ts_length = 10_000
 mc = qe.MarkovChain(P)
 n = len(P)
-fig, axes = plt.subplots(nrows=1, ncols=n)
+fig, axes = plt.subplots(nrows=1, ncols=n, figsize=(15, 6))
 ψ_star = mc.stationary_distributions[0]
 plt.subplots_adjust(wspace=0.35)

 for i in range(n):
-    axes[i].grid()
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color='black',
                     label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(f'fraction of time spent at {i}')
+    axes[i].set_ylabel(fr'$\hat p_t({i})$')

     # Compute the fraction of time spent, starting from different x_0s
     for x0, col in ((0, 'blue'), (1, 'green'), (2, 'red')):
         # Generate time series that starts at different x0
         X = mc.simulate(ts_length, init=x0)
-        X_bar = (X == i).cumsum() / (1 + np.arange(ts_length, dtype=float))
-        axes[i].plot(X_bar, color=col, label=f'$x_0 = \, {x0} $')
+        p_hat = (X == i).cumsum() / (1 + np.arange(ts_length, dtype=float))
+        axes[i].plot(p_hat, color=col, label=f'$x_0 = \, {x0} $')
     axes[i].legend()
 plt.show()
 ```

+Note the convergence to the stationary distribution regardless of the starting point $x_0$.
+
 ### Example 3

 Let's look at one more example with six states {ref}`discussed before <mc_eg3>`.
@@ -330,11 +324,9 @@ P :=
 $$


-The {ref}`graph <mc_eg3>` for the chain shows all states are reachable,
-indicating that this chain is irreducible.
+The graph for the chain shows that the states are densely connected, indicating that it is **irreducible**.

-Similar to previous examples, the sample path averages for each state converge
-to the stationary distribution.
+Then we visualize the difference between $\hat p_t(x)$ and the stationary distribution $\psi^* (x)$.

 ```{code-cell} ipython3
 P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
@@ -351,20 +343,22 @@ fig, ax = plt.subplots(figsize=(9, 6))
 X = mc.simulate(ts_length)
 # Center the plot at 0
 ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color = 'black', alpha=0.4)
+ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)


 for x0 in range(6):
     # Calculate the fraction of time for each state
-    X_bar = (X == x0).cumsum() / (1 + np.arange(ts_length, dtype=float))
-    ax.plot(X_bar - ψ_star[x0], label=f'$X = {x0+1} $')
+    p_hat = (X == x0).cumsum() / (1 + np.arange(ts_length, dtype=float))
+    ax.plot(p_hat - ψ_star[x0], label=f'$x = {x0+1} $')
 ax.set_xlabel('t')
-ax.set_ylabel(r'fraction of time spent in a state $- \psi^* (x)$')
+ax.set_ylabel(r'$\hat p_t(x) - \psi^* (x)$')

 ax.legend()
 plt.show()
 ```

+Similar to previous examples, the sample path averages for each state converge to the stationary distribution, as the plotted differences trend towards 0.
+
 ### Example 4

 Let's look at another example with two states: 0 and 1.
@@ -395,8 +389,7 @@ dot.edge("1", "0", label="1.0", color='red')
 dot
 ```

-
-In fact it has a periodic cycle --- the state cycles between the two states in a regular way.
+Unlike other Markov chains we have seen before, it has a periodic cycle --- the state cycles between the two states in a regular way.

 This is called [periodicity](https://www.randomservices.org/random/markov/Periodicity.html).

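As a quick aside, the periodicity being described can be confirmed numerically, assuming (as the edge labels suggest) that the two-state chain has transition matrix `[[0, 1], [1, 0]]`:

```python
import quantecon as qe

# Assumed transition matrix for the two-state cycle: each state moves
# to the other with probability one.
P = [[0.0, 1.0],
     [1.0, 0.0]]

mc = qe.MarkovChain(P)
print(mc.is_irreducible)   # True: each state is reachable from the other
print(mc.is_aperiodic)     # False: the chain cycles deterministically
print(mc.period)           # 2
```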
@@ -412,19 +405,18 @@ fig, axes = plt.subplots(nrows=1, ncols=n)
 ψ_star = mc.stationary_distributions[0]

 for i in range(n):
-    axes[i].grid()
     axes[i].set_ylim(0.45, 0.55)
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color='black',
                     label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(f'fraction of time spent at {i}')
+    axes[i].set_ylabel(fr'$\hat p_t({i})$')

     # Compute the fraction of time spent, for each x
     for x0 in range(n):
         # Generate time series starting at different x_0
         X = mc.simulate(ts_length, init=x0)
-        X_bar = (X == i).cumsum() / (1 + np.arange(ts_length, dtype=float))
-        axes[i].plot(X_bar, label=f'$x_0 = \, {x0} $')
+        p_hat = (X == i).cumsum() / (1 + np.arange(ts_length, dtype=float))
+        axes[i].plot(p_hat, label=f'$x_0 = \, {x0} $')

     axes[i].legend()
 plt.show()
@@ -436,8 +428,6 @@ The proportion of time spent in a state can converge to the stationary distribut

 However, the distribution at each state does not.

-+++ {"user_expressions": []}
-
 ### Expectations of geometric sums

 Sometimes we want to compute the mathematical expectation of a geometric sum, such as
@@ -553,14 +543,14 @@ mc = qe.MarkovChain(P)
 fig, ax = plt.subplots(figsize=(9, 6))
 X = mc.simulate(ts_length)
 ax.set_ylim(-0.25, 0.25)
-ax.axhline(0, linestyle='dashed', lw=2, color = 'black', alpha=0.4)
+ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)

 for x0 in range(8):
     # Calculate the fraction of time for each worker
-    X_bar = (X == x0).cumsum() / (1 + np.arange(ts_length, dtype=float))
-    ax.plot(X_bar - ψ_star[x0], label=f'$X = {x0+1} $')
+    p_hat = (X == x0).cumsum() / (1 + np.arange(ts_length, dtype=float))
+    ax.plot(p_hat - ψ_star[x0], label=f'$x = {x0+1} $')
 ax.set_xlabel('t')
-ax.set_ylabel(r'fraction of time spent in a state $- \psi^* (x)$')
+ax.set_ylabel(r'$\hat p_t(x) - \psi^* (x)$')

 ax.legend()
 plt.show()
@@ -616,7 +606,7 @@ The result should be similar to the plot we plotted [here](ergo)

 We will address this exercise graphically.

-The plots show the time series of $\bar X_m - p$ for two initial
+The plots show the time series of $\bar{\{X=x\}}_m - p$ for two initial
 conditions.

 As $m$ gets large, both series converge to zero.
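Reading the new notation, the plotted quantity can be spelled out explicitly. Assuming, as in the exercise, that $x$ is the state being tracked and $p = \psi^*(x)$, the series is

$$
\bar X_m - p,
\qquad \text{where} \qquad
\bar X_m := \frac{1}{m} \sum_{t=1}^{m} \mathbf 1\{X_t = x\},
$$

and ergodicity is what drives $\bar X_m \to \psi^*(x)$ as $m \to \infty$.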
@@ -632,8 +622,7 @@ mc = qe.MarkovChain(P)

 fig, ax = plt.subplots(figsize=(9, 6))
 ax.set_ylim(-0.25, 0.25)
-ax.grid()
-ax.hlines(0, 0, ts_length, lw=2, alpha=0.6) # Horizonal line at zero
+ax.axhline(0, linestyle='dashed', lw=2, color='black', alpha=0.4)

 for x0, col in ((0, 'blue'), (1, 'green')):
@@ -642,10 +631,12 @@ for x0, col in ((0, 'blue'), (1, 'green')):
     X_bar = (X == 0).cumsum() / (1 + np.arange(ts_length, dtype=float))
     # Plot
     ax.fill_between(range(ts_length), np.zeros(ts_length), X_bar - p, color=col, alpha=0.1)
-    ax.plot(X_bar - p, color=col, label=f'$X_0 = \, {x0} $')
+    ax.plot(X_bar - p, color=col, label=f'$x_0 = \, {x0} $')
     # Overlay in black--make lines clearer
     ax.plot(X_bar - p, 'k-', alpha=0.6)
-
+
+ax.set_xlabel('t')
+ax.set_ylabel(r'$\bar X_m - \psi^* (x)$')
+
 ax.legend(loc='upper right')
 plt.show()
 ```
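Finally, a small self-contained check of the ergodicity property these exercise plots rely on; the two-state matrix here is a hypothetical stand-in, not the `P` used in the exercise.

```python
import numpy as np
import quantecon as qe

# Hypothetical two-state chain, used only for illustration
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mc = qe.MarkovChain(P)

ψ_star = mc.stationary_distributions[0]
X = mc.simulate(ts_length=200_000, init=0)

# Long-run fraction of time in state 0 vs. its stationary probability
print(np.mean(X == 0), ψ_star[0])   # both close to 2/3
```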