Commit e497f89

fix small errors

1 parent 2aab02d commit e497f89

2 files changed: +18 -25 lines changed

lectures/markov_chains_I.md

Lines changed: 3 additions & 6 deletions
@@ -487,8 +487,6 @@ always close to 0.25 (for the `P` matrix above).
 Here's an illustration using the same $P$ as the preceding example
 
 ```{code-cell} ipython3
-from quantecon import MarkovChain
-
 mc = qe.MarkovChain(P)
 X = mc.simulate(ts_length=1_000_000)
 np.mean(X == 0)
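
The point of the cell is that the long-run fraction of time the path spends in state 0 matches the stationary probability of state 0. A minimal sketch of the same check, using a hypothetical transition matrix chosen so that this probability is 0.25 (the lecture's own `P` is defined earlier and not shown in this diff):

```python
import numpy as np
import quantecon as qe

# Hypothetical stand-in for the lecture's P, chosen so that ψ*(0) = 0.25
P = np.array([[0.5, 0.5],
              [1/6, 5/6]])

mc = qe.MarkovChain(P)
X = mc.simulate(ts_length=1_000_000)

# Fraction of time the simulated path spends in state 0 ...
time_avg = np.mean(X == 0)
# ... compared with the stationary probability of state 0
ψ_star = mc.stationary_distributions[0]
print(time_avg, ψ_star[0])
```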
@@ -750,7 +748,7 @@ This gives some intuition for the following theorem.
 
 
 ```{prf:theorem}
-:label: mc_conv_thm
+:label: mc_po_conv_thm
 
 If $P$ is everywhere positive, then $P$ has exactly one stationary
 distribution.
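
A quick numerical check of the theorem: pushing two very different initial distributions forward through $\psi_{t+1} = \psi_t P$ lands on the same limit. A minimal sketch, assuming a hypothetical strictly positive $3 \times 3$ matrix:

```python
import numpy as np

# Hypothetical everywhere-positive transition matrix
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.4, 0.4]])

def push_forward(ψ, P, t=200):
    """Compute ψ P^t by repeated multiplication."""
    for _ in range(t):
        ψ = ψ @ P
    return ψ

# Two different starting distributions converge to the same limit
print(push_forward(np.array([1.0, 0.0, 0.0]), P))
print(push_forward(np.array([0.0, 0.0, 1.0]), P))
```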
@@ -801,9 +799,8 @@ Sometimes the distribution $\psi_t = \psi_0 P^t$ of $X_t$ converges to $\psi^*$
 
 For example, we have the following result
 
+(strict_stationary)=
 ```{prf:theorem}
-:label: strict_stationary
-
 Theorem: If there exists an integer $m$ such that all entries of $P^m$ are
 strictly positive, with unique stationary distribution $\psi^*$, and
 
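
The hypothesis on $P^m$ is weaker than requiring $P$ itself to be everywhere positive, but it can still fail for an irreducible chain. A small sketch contrasting a hypothetical matrix whose square is strictly positive with a periodic matrix, none of whose powers is strictly positive:

```python
import numpy as np

# Hypothetical P1: not everywhere positive, but P1 @ P1 is,
# so the convergence result applies with m = 2
P1 = np.array([[0.0, 1.0],
               [0.5, 0.5]])
print(np.linalg.matrix_power(P1, 2))   # all entries strictly positive

# Periodic P2: every power contains zeros, so the theorem does not apply,
# and ψ_0 P2^t keeps oscillating between two distributions
P2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
print(np.linalg.matrix_power(P2, 2))   # identity matrix, has zeros
print(np.linalg.matrix_power(P2, 3))   # equal to P2 again
```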
@@ -1170,7 +1167,7 @@ plot_distribution(P, ts_length, num_distributions)
 ````{exercise}
 :label: mc1_ex_2
 
-We discussed the six-state transition matrix estimated by Imam & Temple {cite}`imam2023political` [before](mc_eg3).
+We discussed the six-state transition matrix estimated by Imam & Temple {cite}`imampolitical` [before](mc_eg3).
 
 ```python
 nodes = ['DG', 'DC', 'NG', 'NC', 'AG', 'AC']

lectures/markov_chains_II.md

Lines changed: 15 additions & 19 deletions
@@ -27,23 +27,19 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 
 ## Overview
 
-Markov chains are a standard way to model time series with some dependence
-between observations.
+This lecture continues our journey through Markov chains.
 
-For example,
+Specifically, we will introduce irreducibility and ergodicity, and show how they connect to stationarity.
 
-* inflation next year depends on inflation this year
-* unemployment next month depends on unemployment this month
+Irreducibility describes the ability of a Markov chain to move between any two states in the system.
 
-Markov chains are one of the workhorse models of economics and finance.
+Ergodicity is a sample path property that describes the behavior of the system over long periods of time.
 
-The theory of Markov chains is beautiful and provides many insights into
-probability and dynamics.
+The concepts of irreducibility and ergodicity are closely related to the idea of stationarity.
 
-In this introductory lecture, we will
+An irreducible Markov chain guarantees the existence of a unique stationary distribution, while an ergodic Markov chain ensures that the system eventually reaches its stationary distribution, regardless of its initial state.
 
-* review some of the key ideas from the theory of Markov chains and
-* show how Markov chains appear in some economic applications.
+Together, these concepts provide a foundation for understanding the long-term behavior of Markov chains.
 
 Let's start with some standard imports:

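The claim that an irreducible chain has a unique stationary distribution can be checked directly with `quantecon`; a minimal sketch using a hypothetical two-state matrix:

```python
import numpy as np
import quantecon as qe

# Hypothetical two-state chain that can move between both states
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
mc = qe.MarkovChain(P)

print(mc.is_irreducible)              # True: each state is reachable from the other
print(mc.stationary_distributions)    # a single stationary distribution
```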
@@ -344,7 +340,7 @@ It is still irreducible, however, so ergodicity holds.
 P = np.array([[0, 1],
               [1, 0]])
 ts_length = 10_000
-mc = MarkovChain(P)
+mc = qe.MarkovChain(P)
 n = len(P)
 fig, axes = plt.subplots(nrows=1, ncols=n)
 ψ_star = mc.stationary_distributions[0]
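
The time averages still settle on $\psi^*$ despite the periodicity; a minimal sketch of the comparison behind the plot:

```python
import numpy as np
import quantecon as qe

P = np.array([[0, 1],
              [1, 0]])
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]   # (0.5, 0.5)

X = mc.simulate(ts_length=10_000)
for x0 in range(len(P)):
    # fraction of time spent in state x0 versus its stationary probability
    print(np.mean(X == x0), ψ_star[x0])
```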
@@ -406,18 +402,18 @@ $$
 
 Benhabib et al. {cite}`benhabib_wealth_2019` estimated the following transition matrix for social mobility
 
-$$P_B:=\left(\begin{array}{cccccccc}0.222 & 0.222 & 0.215 & 0.187 & 0.081 & 0.038 & 0.029 & 0.006 \\ 0.221 & 0.22 & 0.215 & 0.188 & 0.082 & 0.039 & 0.029 & 0.006 \\ 0.207 & 0.209 & 0.21 & 0.194 & 0.09 & 0.046 & 0.036 & 0.008 \\ 0.198 & 0.201 & 0.207 & 0.198 & 0.095 & 0.052 & 0.04 & 0.009 \\ 0.175 & 0.178 & 0.197 & 0.207 & 0.11 & 0.067 & 0.054 & 0.012 \\ 0.182 & 0.184 & 0.2 & 0.205 & 0.106 & 0.062 & 0.05 & 0.011 \\ 0.123 & 0.125 & 0.166 & 0.216 & 0.141 & 0.114 & 0.094 & 0.021 \\ 0.084 & 0.084 & 0.142 & 0.228 & 0.17 & 0.143 & 0.121 & 0.028\end{array}\right)$$
+$$P:=\left(\begin{array}{cccccccc}0.222 & 0.222 & 0.215 & 0.187 & 0.081 & 0.038 & 0.029 & 0.006 \\ 0.221 & 0.22 & 0.215 & 0.188 & 0.082 & 0.039 & 0.029 & 0.006 \\ 0.207 & 0.209 & 0.21 & 0.194 & 0.09 & 0.046 & 0.036 & 0.008 \\ 0.198 & 0.201 & 0.207 & 0.198 & 0.095 & 0.052 & 0.04 & 0.009 \\ 0.175 & 0.178 & 0.197 & 0.207 & 0.11 & 0.067 & 0.054 & 0.012 \\ 0.182 & 0.184 & 0.2 & 0.205 & 0.106 & 0.062 & 0.05 & 0.011 \\ 0.123 & 0.125 & 0.166 & 0.216 & 0.141 & 0.114 & 0.094 & 0.021 \\ 0.084 & 0.084 & 0.142 & 0.228 & 0.17 & 0.143 & 0.121 & 0.028\end{array}\right)$$
 
 where each state 1 to 8 corresponds to a percentile of wealth shares
 
 $$
 0-20 \%, 20-40 \%, 40-60 \%, 60-80 \%, 80-90 \%, 90-95 \%, 95-99 \%, 99-100 \%
 $$
 
-The matrix is recorded as `P_B` below
+The matrix is recorded as `P` below
 
 ```python
-P_B = [
+P = [
     [0.222, 0.222, 0.215, 0.187, 0.081, 0.038, 0.029, 0.006],
     [0.221, 0.22, 0.215, 0.188, 0.082, 0.039, 0.029, 0.006],
     [0.207, 0.209, 0.21, 0.194, 0.09, 0.046, 0.036, 0.008],
@@ -428,7 +424,7 @@ P_B = [
     [0.084, 0.084, 0.142, 0.228, 0.17, 0.143, 0.121, 0.028]
 ]
 
-P_B = np.array(P_B)
+P = np.array(P)
 codes_B = ('1','2','3','4','5','6','7','8')
 ```
 
@@ -462,7 +458,7 @@ P = [
 P = np.array(P)
 codes_B = ('1','2','3','4','5','6','7','8')
 
-np.linalg.matrix_power(P_B, 10)
+np.linalg.matrix_power(P, 10)
 ```
 
 We find again that rows of the transition matrix converge to the stationary distribution
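
The convergence can be made concrete by comparing each row of $P^{10}$ with the stationary distribution computed by `quantecon`; a minimal sketch, assuming `P` is the mobility matrix recorded above:

```python
import numpy as np
import quantecon as qe

# Assumes P is the 8 x 8 social mobility matrix defined above
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

# Each row of P^10 is already close to ψ_star
P10 = np.linalg.matrix_power(P, 10)
print(np.max(np.abs(P10 - ψ_star)))   # small: every row ≈ ψ_star
```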
@@ -477,7 +473,7 @@ mc = qe.MarkovChain(P)
 
 ```{code-cell} ipython3
 ts_length = 1000
-mc = MarkovChain(P)
+mc = qe.MarkovChain(P)
 fig, ax = plt.subplots(figsize=(9, 6))
 X = mc.simulate(ts_length)
 # Center the plot at 0
@@ -560,7 +556,7 @@ p = β / (α + β)
 
 P = ((1 - α, α), # Careful: P and p are distinct
      ( β, 1 - β))
-mc = MarkovChain(P)
+mc = qe.MarkovChain(P)
 
 fig, ax = plt.subplots(figsize=(9, 6))
 ax.set_ylim(-0.25, 0.25)

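For this two-state chain the stationary probability of state 0 has the closed form $p = \beta/(\alpha+\beta)$, which the plot compares against the sample path; the same check in a few lines, with hypothetical values for $\alpha$ and $\beta$:

```python
import numpy as np
import quantecon as qe

α, β = 0.1, 0.05            # hypothetical transition probabilities
p = β / (α + β)             # closed-form stationary probability of state 0

P = np.array([[1 - α, α],
              [β, 1 - β]])
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

print(p, ψ_star[0])         # the two values agree
```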