Commit 63c3553

update eigenvalue in mc and add example to mc2
1 parent e497f89 commit 63c3553

File tree

2 files changed: +109 -34 lines changed


lectures/markov_chains_I.md

Lines changed: 44 additions & 20 deletions
@@ -13,7 +13,7 @@ kernelspec:

 +++ {"user_expressions": []}

-# Markov Chains I
+# Markov Chains: Basic Concepts and Stationarity

 In addition to what's in Anaconda, this lecture will need the following libraries:

@@ -270,12 +270,12 @@ $$
 ```{code-cell} ipython3
 nodes = ['DG', 'DC', 'NG', 'NC', 'AG', 'AC']
-trans_matrix = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
-                [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
-                [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
-                [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
-                [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
-                [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
+P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
+     [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
+     [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
+     [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
+     [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
+     [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
 ```

 ```{code-cell} ipython3
@@ -285,7 +285,7 @@ label_dict = {}
 for start_idx, node_start in enumerate(nodes):
     for end_idx, node_end in enumerate(nodes):
-        value = trans_matrix[start_idx][end_idx]
+        value = P[start_idx][end_idx]
         if value != 0:
             G.add_edge(node_start,node_end, weight=value, len=100)
@@ -815,6 +815,7 @@ See, for example, {cite}`sargent2023economic` Chapter 4.

 +++ {"user_expressions": []}

+(hamilton)=
 #### Example: Hamilton's Chain

 Hamilton's chain satisfies the conditions of the theorem because $P^2$ is everywhere positive:
@@ -823,7 +824,6 @@ Hamilton's chain satisfies the conditions of the theorem because $P^2$ is everyw
 P = np.array([[0.971, 0.029, 0.000],
               [0.145, 0.778, 0.077],
               [0.000, 0.508, 0.492]])
-P = np.array(P)
 P @ P
 ```
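The everywhere-positive claim is easy to verify numerically; a minimal sketch reusing the matrix above:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# Every entry of P² should be strictly positive
print(np.all(P @ P > 0))  # True
```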

@@ -1099,8 +1099,7 @@ Compare your solution to the paper.
 :class: dropdown
 ```

-1.
-
+1.

 ```{code-cell} ipython3
 :tags: [hide-output]
@@ -1155,12 +1154,43 @@ mc = qe.MarkovChain(P)

 3.

+We find that the distribution $\psi$ converges to the stationary distribution more quickly than in {ref}`Hamilton's chain <hamilton>`.
+
 ```{code-cell} ipython3
 ts_length = 10
 num_distributions = 25
 plot_distribution(P, ts_length, num_distributions)
 ```

+In fact, the rate of convergence is governed by {ref}`eigenvalues <eigen>` {cite}`sargent2023economic`.
+
+```{code-cell} ipython3
+P_eigenvals = np.linalg.eigvals(P)
+P_eigenvals
+```
+
+```{code-cell} ipython3
+P_hamilton = np.array([[0.971, 0.029, 0.000],
+                       [0.145, 0.778, 0.077],
+                       [0.000, 0.508, 0.492]])
+
+hamilton_eigenvals = np.linalg.eigvals(P_hamilton)
+hamilton_eigenvals
+```
+
+More specifically, the rate is governed by the spectral gap: the difference in modulus between the largest and the second-largest eigenvalue.
+
+```{code-cell} ipython3
+# Sort the eigenvalue moduli in descending order before differencing
+P_moduli = np.sort(np.abs(P_eigenvals))[::-1]
+hamilton_moduli = np.sort(np.abs(hamilton_eigenvals))[::-1]
+
+sp_gap_P = P_moduli[0] - P_moduli[1]
+sp_gap_hamilton = hamilton_moduli[0] - hamilton_moduli[1]
+
+sp_gap_P > sp_gap_hamilton
+```
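To see the gap at work, a rough sketch using only `P_hamilton` from above: the deviation of $\psi P^t$ from $\psi^*$ shrinks at roughly the rate $|\lambda_2|^t$, where $|\lambda_2|$ is the second-largest eigenvalue modulus.

```python
import numpy as np
import quantecon as qe

P_hamilton = np.array([[0.971, 0.029, 0.000],
                       [0.145, 0.778, 0.077],
                       [0.000, 0.508, 0.492]])

mc = qe.MarkovChain(P_hamilton)
ψ_star = mc.stationary_distributions[0]

# Second-largest eigenvalue modulus controls the convergence rate
λ2 = np.sort(np.abs(np.linalg.eigvals(P_hamilton)))[-2]

ψ = np.array([1.0, 0.0, 0.0])  # start in state 1 with probability one
for t in range(6):
    print(f"t={t}  error={np.max(np.abs(ψ - ψ_star)):.5f}  |λ2|^t={λ2**t:.5f}")
    ψ = ψ @ P_hamilton
```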
+
+We will come back to this in
+
+TODO: add a reference to eigen II
+
 ```{solution-end}
 ```

@@ -1192,7 +1222,7 @@ In this exercise,

 1.

-Although $P$ is not every positive, $P^m$ when $m=3$ is everywhere positive.
+Although $P$ is not everywhere positive, $P^m$ when $m=3$ is everywhere positive.

 ```{code-cell} ipython3
 P = np.array([[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
@@ -1209,18 +1239,12 @@ So it satisfies the requirement.

 2.

-We can see the distribution $\psi$ converges to the stationary distribution quickly regardless of the initial distributions
+We find that the distribution $\psi$ converges to the stationary distribution quickly regardless of the initial distributions.

 ```{code-cell} ipython3
 ts_length = 30
 num_distributions = 20
 nodes = ['DG', 'DC', 'NG', 'NC', 'AG', 'AC']
-P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
-     [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
-     [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
-     [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
-     [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
-     [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]

 # Get parameters of transition matrix
 n = len(P)
@@ -1277,4 +1301,4 @@ Therefore $P^{k+1} \mathbf 1 = P P^k \mathbf 1 = P \mathbf 1 = \mathbf 1$
 The proof is done.

 ```{solution-end}
-```
+```
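The induction step rests on $P \mathbf 1 = \mathbf 1$; a minimal numerical spot-check, reusing Hamilton's matrix from earlier in the lecture:

```python
import numpy as np

P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# Every power of a stochastic matrix keeps unit row sums: P^k 1 = 1
for k in range(1, 5):
    print(k, np.linalg.matrix_power(P, k).sum(axis=1))
```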

lectures/markov_chains_II.md

Lines changed: 65 additions & 14 deletions
@@ -13,7 +13,7 @@ kernelspec:

 +++ {"user_expressions": []}

-# Markov Chains II
+# Markov Chains: Irreducibility and Ergodicity

 In addition to what's in Anaconda, this lecture will need the following libraries:

@@ -262,7 +262,7 @@ This is one aspect of the concept of ergodicity.
 ### Example 2


-Another example is Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed above <mc_eg2>`.
+Another example is the Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed before <mc_eg2>`.

 The diagram of the Markov chain shows that it is **irreducible**.

@@ -298,6 +298,61 @@ plt.show()

 ### Example 3

+Let's look at one more example with six states, {ref}`discussed before <mc_eg3>`.
+
+$$
+P :=
+\left(
+\begin{array}{cccccc}
+0.86 & 0.11 & 0.03 & 0.00 & 0.00 & 0.00 \\
+0.52 & 0.33 & 0.13 & 0.02 & 0.00 & 0.00 \\
+0.12 & 0.03 & 0.70 & 0.11 & 0.03 & 0.01 \\
+0.13 & 0.02 & 0.35 & 0.36 & 0.10 & 0.04 \\
+0.00 & 0.00 & 0.09 & 0.11 & 0.55 & 0.25 \\
+0.00 & 0.00 & 0.09 & 0.15 & 0.26 & 0.50
+\end{array}
+\right)
+$$
+
+The graph for the chain shows that the states are densely connected, indicating that it is **irreducible**.
+
+Similar to previous examples, the sample path averages for each state converge to the stationary distribution.
+
+```{code-cell} ipython3
+P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
+     [0.52, 0.33, 0.13, 0.02, 0.00, 0.00],
+     [0.12, 0.03, 0.70, 0.11, 0.03, 0.01],
+     [0.13, 0.02, 0.35, 0.36, 0.10, 0.04],
+     [0.00, 0.00, 0.09, 0.11, 0.55, 0.25],
+     [0.00, 0.00, 0.09, 0.15, 0.26, 0.50]]
+
+ts_length = 10_000
+mc = qe.MarkovChain(P)
+ψ_star = mc.stationary_distributions[0]
+fig, ax = plt.subplots(figsize=(9, 6))
+X = mc.simulate(ts_length)
+# Center the plot at 0
+ax.set_ylim(-0.25, 0.25)
+ax.axhline(0, linestyle='dashed', lw=2, color = 'black', alpha=0.4)
+
+for x0 in range(6):
+    # Calculate the fraction of time spent in each state
+    X_bar = (X == x0).cumsum() / (1 + np.arange(ts_length, dtype=float))
+    ax.plot(X_bar - ψ_star[x0], label=f'$X = {x0+1}$')
+ax.set_xlabel('t')
+ax.set_ylabel(r'fraction of time spent in a state $- \psi^* (x)$')
+
+ax.legend()
+plt.show()
+```
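Irreducibility can also be confirmed programmatically; a quick check reusing the `mc` object built above (via the `qe.MarkovChain` interface used throughout):

```python
# A chain is irreducible iff it has a single communication class
print(mc.is_irreducible)             # True
print(mc.num_communication_classes)  # 1
```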
+
+### Example 4
+
 Let's look at another example with two states: 0 and 1.


@@ -444,16 +499,14 @@ In this exercise,
 Using the technique we learned before, we can take powers of the transition matrix:

 ```{code-cell} ipython3
-P = [
-    [0.222, 0.222, 0.215, 0.187, 0.081, 0.038, 0.029, 0.006],
-    [0.221, 0.22, 0.215, 0.188, 0.082, 0.039, 0.029, 0.006],
-    [0.207, 0.209, 0.21, 0.194, 0.09, 0.046, 0.036, 0.008],
-    [0.198, 0.201, 0.207, 0.198, 0.095, 0.052, 0.04, 0.009],
-    [0.175, 0.178, 0.197, 0.207, 0.11, 0.067, 0.054, 0.012],
-    [0.182, 0.184, 0.2, 0.205, 0.106, 0.062, 0.05, 0.011],
-    [0.123, 0.125, 0.166, 0.216, 0.141, 0.114, 0.094, 0.021],
-    [0.084, 0.084, 0.142, 0.228, 0.17, 0.143, 0.121, 0.028]
-]
+P = [[0.222, 0.222, 0.215, 0.187, 0.081, 0.038, 0.029, 0.006],
+     [0.221, 0.22, 0.215, 0.188, 0.082, 0.039, 0.029, 0.006],
+     [0.207, 0.209, 0.21, 0.194, 0.09, 0.046, 0.036, 0.008],
+     [0.198, 0.201, 0.207, 0.198, 0.095, 0.052, 0.04, 0.009],
+     [0.175, 0.178, 0.197, 0.207, 0.11, 0.067, 0.054, 0.012],
+     [0.182, 0.184, 0.2, 0.205, 0.106, 0.062, 0.05, 0.011],
+     [0.123, 0.125, 0.166, 0.216, 0.141, 0.114, 0.094, 0.021],
+     [0.084, 0.084, 0.142, 0.228, 0.17, 0.143, 0.121, 0.028]]

 P = np.array(P)
 codes_B = ('1','2','3','4','5','6','7','8')
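The power computation itself falls outside this hunk; a minimal sketch of the step the text describes, assuming `P` is the NumPy array defined above and that the intent is to watch the rows of $P^n$ flatten toward the stationary distribution:

```python
import numpy as np

# Rows of P^n converge to the stationary distribution as n grows;
# compare the first and last rows at a moderate power
P_power = np.linalg.matrix_power(P, 10)
print(P_power[0])
print(P_power[-1])
```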
@@ -476,11 +529,9 @@ ts_length = 1000
 mc = qe.MarkovChain(P)
 fig, ax = plt.subplots(figsize=(9, 6))
 X = mc.simulate(ts_length)
-# Center the plot at 0
 ax.set_ylim(-0.25, 0.25)
 ax.axhline(0, linestyle='dashed', lw=2, color = 'black', alpha=0.4)
-
 for x0 in range(8):
     # Calculate the fraction of time for each worker
     X_bar = (X == x0).cumsum() / (1 + np.arange(ts_length, dtype=float))
