
Commit 2aab02d

update markov chain
1 parent 463b5e1 commit 2aab02d

File tree: 3 files changed, +173 / -74 lines


lectures/_static/quant-econ.bib

Lines changed: 2 additions & 2 deletions
@@ -2563,9 +2563,9 @@ @article{hog_cycle
 }
 
 
-@article{imam2023political,
+@article{imampolitical,
 title={Political Institutions and Output Collapses},
 author={Imam, Patrick and Temple, Jonathan RW},
 year={2023},
-publisher={IMF Working Paper}
+journal={IMF Working Paper}
 }

lectures/markov_chains_I.md

Lines changed: 170 additions & 71 deletions
@@ -56,6 +56,7 @@ from graphviz import Digraph
 import networkx as nx
 from matplotlib import cm
 import matplotlib as mpl
+from itertools import cycle
 ```
 
 +++ {"user_expressions": []}
@@ -82,7 +83,7 @@ In other words,
 
 If $P$ is a stochastic matrix, then so is the $k$-th power $P^k$ for all $k \in \mathbb N$.
 
-Checking this is {ref}`one of the exercises <mc_ex_pk>` below.
+Checking this is {ref}`one of the exercises <mc1_ex_3>` below.
 
 
 ### Markov Chains
@@ -240,7 +241,71 @@ Then we can address a range of questions, such as
 
 We'll cover some of these applications below.
 
+(mc_eg3)=
+#### Example 3
 
+Imam and Temple {cite}`imampolitical` categorize political institutions into three types: democracy (D), autocracy (A), and an intermediate state called anocracy (N).
+
+Each institution can have two potential development regimes: collapse (C) and growth (G). This results in six possible states: DG, DC, NG, NC, AG, and AC.
+
+The low probability of transitioning from NC to itself indicates that collapses in anocracies quickly lead to changes in the political institution.
+
+Democracies tend to have longer-lasting growth regimes than autocracies, as indicated by the lower probability of transitioning from growth to growth in autocracies.
+
+We can also find a higher probability of transitioning from collapse to growth in democratic regimes:
+
+$$
+P :=
+\left(
+\begin{array}{cccccc}
+0.86 & 0.11 & 0.03 & 0.00 & 0.00 & 0.00 \\
+0.52 & 0.33 & 0.13 & 0.02 & 0.00 & 0.00 \\
+0.12 & 0.03 & 0.70 & 0.11 & 0.03 & 0.01 \\
+0.13 & 0.02 & 0.35 & 0.36 & 0.10 & 0.04 \\
+0.00 & 0.00 & 0.09 & 0.11 & 0.55 & 0.25 \\
+0.00 & 0.00 & 0.09 & 0.15 & 0.26 & 0.50
+\end{array}
+\right)
+$$
+
+```{code-cell} ipython3
+nodes = ['DG', 'DC', 'NG', 'NC', 'AG', 'AC']
+trans_matrix = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
+                [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
+                [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
+                [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
+                [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
+                [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
+```
+
+```{code-cell} ipython3
+G = nx.MultiDiGraph()
+edge_ls = []
+label_dict = {}
+
+for start_idx, node_start in enumerate(nodes):
+    for end_idx, node_end in enumerate(nodes):
+        value = trans_matrix[start_idx][end_idx]
+        if value != 0:
+            G.add_edge(node_start, node_end, weight=value, len=100)
+
+pos = nx.spring_layout(G, seed=10)
+fig, ax = plt.subplots()
+nx.draw_networkx_nodes(G, pos, node_size=600, edgecolors='black', node_color='white')
+nx.draw_networkx_labels(G, pos)
+
+arc_rad = 0.2
+curved_edges = [edge for edge in G.edges()]
+edges = nx.draw_networkx_edges(G, pos, ax=ax, connectionstyle=f'arc3, rad = {arc_rad}', edge_cmap=cm.Blues, width=2,
+                               edge_color=[G[nodes[0]][nodes[1]][0]['weight'] for nodes in G.edges])
+
+pc = mpl.collections.PatchCollection(edges, cmap=cm.Blues)
+
+ax = plt.gca()
+ax.set_axis_off()
+plt.colorbar(pc, ax=ax)
+plt.show()
+```
 
 ### Defining Markov Chains
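
A quick sanity check on the six-state matrix introduced in Example 3 (a sketch, not part of this commit): every row of a stochastic matrix must sum to one, and the stationary distribution can be read off with `quantecon`, which the lecture already imports as `qe`.

```python
import numpy as np
import quantecon as qe

# Six-state matrix from Example 3 above (states DG, DC, NG, NC, AG, AC)
P = np.array([[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
              [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
              [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
              [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
              [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
              [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]])

# Every row of a stochastic matrix sums to one
assert np.allclose(P.sum(axis=1), 1.0)

# Stationary distribution, computed the same way the later exercises do
mc = qe.MarkovChain(P)
print(mc.stationary_distributions[0])
```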

@@ -826,8 +891,7 @@ We can show this in a slightly different way by focusing on the probability that
 First, we write a function to draw initial distributions $\psi_0$ of size `num_distributions`
 
 ```{code-cell} ipython3
-def generate_initial_values(num_distributions, n):
-
+def generate_initial_values(num_distributions):
     n = len(P)
     ψ_0s = np.empty((num_distributions, n))
 
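The diff truncates the body of `generate_initial_values`. A minimal self-contained sketch of what such a function might do (an assumption, not the lecture's actual implementation) is to draw nonnegative vectors and normalize each row into a distribution:

```python
import numpy as np

# Hypothetical sketch: draw `num_distributions` random probability vectors
# of length n by normalizing nonnegative draws so each row sums to one.
def generate_initial_values_sketch(num_distributions, n):
    draws = np.random.uniform(size=(num_distributions, n))
    return draws / draws.sum(axis=1, keepdims=True)

ψ_0s = generate_initial_values_sketch(25, 3)
print(ψ_0s.sum(axis=1))  # each entry is 1.0
```
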
@@ -854,7 +918,7 @@ def plot_distribution(P, ts_length, num_distributions):
     fig, axes = plt.subplots(nrows=1, ncols=n)
     plt.subplots_adjust(wspace=0.35)
 
-    ψ_0s = generate_initial_values(num_distributions, n)
+    ψ_0s = generate_initial_values(num_distributions)
 
     # Get the path for each starting value
     for ψ_0 in ψ_0s:
@@ -890,7 +954,6 @@ P = np.array([[0.971, 0.029, 0.000],
 plot_distribution(P, ts_length, num_distributions)
 ```
 
-
 The convergence to $\psi^*$ holds for different initial distributions.
 
 +++ {"user_expressions": []}
@@ -1010,16 +1073,38 @@ The vector $P^k h$ stores the conditional expectation $\mathbb E [ h(X_{t + k})
 
 ```{exercise}
 :label: mc1_ex_1
-```
 
-Imam, P., & Temple, J. R. {cite}`imam2023political` used a three-state transition matrix to describe the transition of three states of a regime: growth, stagnation, and collapse
+Imam and Temple {cite}`imampolitical` used a three-state transition matrix to describe transitions between three states of a regime: growth, stagnation, and collapse
 
-```{code-cell} ipython3
-P = [[0.68, 0.12, 0.20],
-     [0.50, 0.24, 0.26],
-     [0.36, 0.18, 0.46]]
+$$
+P :=
+\left(
+\begin{array}{ccc}
+0.68 & 0.12 & 0.20 \\
+0.50 & 0.24 & 0.26 \\
+0.36 & 0.18 & 0.46
+\end{array}
+\right)
+$$
+
+where the rows, from top to bottom, correspond to growth, stagnation, and collapse.
+
+In this exercise,
+
+1. visualize the transition matrix and show this process is asymptotically stationary
+1. calculate the stationary distribution using simulations
+1. visualize the dynamics of $(\psi_0 P^t)(i)$ where $t \in \{0, \ldots, 25\}$ and compare the convergent path with the previous transition matrix
+
+Compare your solution to the paper.
+```
+
+```{solution-start} mc1_ex_1
+:class: dropdown
 ```
 
+1.
+
+
 ```{code-cell} ipython3
 :tags: [hide-output]
 
@@ -1044,24 +1129,8 @@ dot.edge("Collapse", "Growth", label="0.36")
 dot
 ```
 
-In this exercise,
-
-1. show this process is asymptotically stationary
-1. calculate the stationary distribution using simulations
-1. visualize the dynamics of $(\psi_0 P^t)(i)$ where $t \in 0, ..., 25$ and compare the convergent path with the previous transition matrix
-
-Compare your solution to the paper.
-```
-
-```{solution-start} mc1_ex_1
-:class: dropdown
-```
-
-1.
-
 Since the matrix is everywhere positive, there is a unique stationary distribution.
 
-
 2.
 
 One simple way to calculate the stationary distribution is to take the power of the transition matrix as we have shown before
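
To make the matrix-power idea concrete (a sketch, not part of the commit), the rows of $P^k$ for the three-state matrix above all converge to the stationary distribution $\psi^*$ as $k$ grows:

```python
import numpy as np

# Three-state matrix (growth, stagnation, collapse) from mc1_ex_1
P = np.array([[0.68, 0.12, 0.20],
              [0.50, 0.24, 0.26],
              [0.36, 0.18, 0.46]])

# For large k every row of P^k approximates the stationary distribution ψ*
P_k = np.linalg.matrix_power(P, 30)
print(P_k[0])
```
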
@@ -1090,67 +1159,97 @@ mc = qe.MarkovChain(P)
 3.
 
 ```{code-cell} ipython3
-ts_length = 25
+ts_length = 10
 num_distributions = 25
 plot_distribution(P, ts_length, num_distributions)
 ```
 
+```{solution-end}
 ```
 
+````{exercise}
+:label: mc1_ex_2
 
-$$
-P :=
-\left(
-\begin{array}{cccccc}
-0.72 & 0.11 & 0.11 & 0.05 & 0.00 & 0.01 \\
-0.53 & 0.26 & 0.08 & 0.06 & 0.00 & 0.02 \\
-0.42 & 0.21 & 0.25 & 0.06 & 0.00 & 0.06 \\
-0.05 & 0.00 & 0.00 & 0.63 & 0.10 & 0.22 \\
-0.03 & 0.03 & 0.00 & 0.42 & 0.21 & 0.31 \\
-0.05 & 0.01 & 0.01 & 0.26 & 0.14 & 0.53
-\end{array}
-\right)
-$$
+We discussed the six-state transition matrix estimated by Imam and Temple {cite}`imampolitical` {ref}`before <mc_eg3>`.
 
-```{code-cell} ipython3
-nodes = ['DG', 'DS', 'DC', 'AG', 'AS', 'AC']
-trans_matrix = [[0.72, 0.11, 0.11, 0.05, 0.00, 0.01],
-                [0.53, 0.26, 0.08, 0.06, 0.00, 0.02],
-                [0.42, 0.21, 0.25, 0.06, 0.00, 0.06],
-                [0.05, 0.00, 0.00, 0.63, 0.10, 0.22],
-                [0.03, 0.03, 0.00, 0.42, 0.21, 0.31],
-                [0.05, 0.01, 0.01, 0.26, 0.14, 0.53]]
+```python
+nodes = ['DG', 'DC', 'NG', 'NC', 'AG', 'AC']
+P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
+     [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
+     [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
+     [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
+     [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
+     [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
+```
+
+In this exercise,
+
+1. show this process is asymptotically stationary without simulation
+1. simulate and visualize the dynamics starting with a uniform distribution across states (each state will have a probability of 1/6)
+1. change the initial distribution to $P(DG) = 1$, with all other states having probability 0
+````
+
+```{solution-start} mc1_ex_2
+:class: dropdown
 ```
 
+1.
+
+Although $P$ is not everywhere positive, $P^m$ is everywhere positive when $m=3$.
+
 ```{code-cell} ipython3
-G = nx.MultiDiGraph()
-edge_ls = []
-label_dict = {}
+P = np.array([[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
+              [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
+              [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
+              [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
+              [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
+              [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]])
+
+np.linalg.matrix_power(P, 3)
+```
 
-for start_idx, node_start in enumerate(nodes):
-    for end_idx, node_end in enumerate(nodes):
-        value = trans_matrix[start_idx][end_idx]
-        if value != 0:
-            G.add_edge(node_start, node_end, weight=value, len=100)
-
-pos = nx.spring_layout(G, seed=10)
-fig, ax = plt.subplots()
-nx.draw_networkx_nodes(G, pos, node_size=600, edgecolors='black', node_color='white')
-nx.draw_networkx_labels(G, pos)
+So $P$ satisfies the requirement for asymptotic stationarity.
 
-arc_rad = 0.2
-curved_edges = [edge for edge in G.edges() if (edge[1], edge[0]) in G.edges()]
-edges = nx.draw_networkx_edges(G, pos, ax=ax, connectionstyle=f'arc3, rad = {arc_rad}', edge_cmap=cm.Blues, width=2,
-                               edge_color=[G[nodes[0]][nodes[1]][0]['weight'] for nodes in G.edges])
+2.
 
-pc = mpl.collections.PatchCollection(edges, cmap=cm.Blues)
+We can see that the distribution $\psi$ converges quickly to the stationary distribution, regardless of the initial distribution
+
+```{code-cell} ipython3
+ts_length = 30
+num_distributions = 20
+nodes = ['DG', 'DC', 'NG', 'NC', 'AG', 'AC']
+P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
+     [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
+     [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
+     [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
+     [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
+     [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
+
+# Get parameters of transition matrix
+n = len(P)
+mc = qe.MarkovChain(P)
+ψ_star = mc.stationary_distributions[0]
+ψ_0 = np.array([[1/6 for i in range(6)],
+                [0 if i != 0 else 1 for i in range(6)]])
+# Draw the plot
+fig, axes = plt.subplots(ncols=2)
+plt.subplots_adjust(wspace=0.35)
+for idx in range(2):
+    ψ_t = iterate_ψ(ψ_0[idx], P, ts_length)
+    for i in range(n):
+        axes[idx].plot(ψ_t[:, i] - ψ_star[i], alpha=0.5, label=fr'$\psi_t({i+1})$')
+    axes[idx].set_ylim([-0.3, 0.3])
+    axes[idx].set_xlabel('t')
+    axes[idx].set_ylabel(fr'$\psi_t$')
+    axes[idx].legend()
+    axes[idx].axhline(0, linestyle='dashed', lw=1, color='black')
 
-ax = plt.gca()
-ax.set_axis_off()
-plt.colorbar(pc, ax=ax)
 plt.show()
 ```
 
+```{solution-end}
+```
+
 ```{exercise}
 :label: mc1_ex_3
 Prove the following: If $P$ is a stochastic matrix, then so is the $k$-th
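
The `mc1_ex_2` solution above calls `iterate_ψ`, which is defined earlier in the lecture and does not appear in this diff. A minimal sketch consistent with how it is called there (an assumption, not the lecture's actual definition):

```python
import numpy as np

# Hypothetical sketch of iterate_ψ (the lecture's own definition is not
# shown in this diff): return the sequence ψ_0, ψ_0 P, ψ_0 P^2, ...
# as the rows of an array with ts_length rows.
def iterate_ψ(ψ_0, P, ts_length):
    P = np.asarray(P)
    ψ_t = np.empty((ts_length, len(P)))
    ψ = np.asarray(ψ_0)
    for t in range(ts_length):
        ψ_t[t] = ψ
        ψ = ψ @ P
    return ψ_t
```
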
@@ -1181,4 +1280,4 @@ Therefore $P^{k+1} \mathbf 1 = P P^k \mathbf 1 = P \mathbf 1 = \mathbf 1$
 The proof is done.
 
 ```{solution-end}
-```
+```
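
As a numerical companion to `mc1_ex_3` (a sketch, not part of the commit), one can verify that powers of a stochastic matrix stay stochastic, here using the three-state matrix from `mc1_ex_1`:

```python
import numpy as np

# Powers of a stochastic matrix remain stochastic: entries stay
# nonnegative and every row still sums to one (the claim in mc1_ex_3).
P = np.array([[0.68, 0.12, 0.20],
              [0.50, 0.24, 0.26],
              [0.36, 0.18, 0.46]])

for k in range(1, 6):
    P_k = np.linalg.matrix_power(P, k)
    assert (P_k >= 0).all() and np.allclose(P_k.sum(axis=1), 1.0)
print("P^k is stochastic for k = 1, ..., 5")
```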

lectures/markov_chains_II.md

Lines changed: 1 addition & 1 deletion
@@ -185,7 +185,7 @@ We discussed uniqueness of the stationary in the {ref}`previous lecture <station
 
 In fact irreducibility is enough for the uniqueness of the stationary distribution to hold if the distribution exists.
 
-We can revise the [theorem](strict_stationary) into the following fundamental theorem:
+We can revise the {ref}`theorem<strict_stationary>` into the following fundamental theorem:
 
 ```{prf:theorem}
 :label: mc_conv_thm
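
Aside (not part of this commit): irreducibility, the hypothesis of the theorem referenced above, can be checked directly with `quantecon` for the six-state matrix from `markov_chains_I`:

```python
import numpy as np
import quantecon as qe

# Six-state matrix from markov_chains_I; irreducibility is the hypothesis
# of the uniqueness theorem stated above.
P = np.array([[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
              [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
              [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
              [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
              [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
              [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]])

print(qe.MarkovChain(P).is_irreducible)  # expected: True
```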

0 commit comments
