
Commit 8a0be8f

Merge pull request #67 from QuantEcon/integrate-mc
[markov_chains] Integrate Comments
2 parents d31d69f + 6d118a6 commit 8a0be8f

2 files changed: +29 / -31 lines changed

lectures/lln_clt.md

Lines changed: 2 additions & 0 deletions
@@ -368,6 +368,8 @@ Since the distribution of $\bar X$ follows a standard normal distribution, but t
 This violates {eq}`exp`, and thus breaks LLN.

 ```{note}
+:name: iid_violation
+
 Although in this case, the violation of IID breaks LLN, it is not always the case for correlated data.

 We will show an example in the [exercise](lln_ex3).
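The new `iid_violation` label marks the claim that correlation alone need not break the LLN. A minimal sketch of that claim, separate from the lln_ex3 exercise referenced above: an assumed stationary Gaussian AR(1) process is serially correlated, so its draws are not IID, yet its sample mean still converges to the population mean.

```python
import numpy as np

# Assumed illustration: x_{t+1} = ρ x_t + ε_{t+1}, |ρ| < 1, ε ~ N(0, 1).
# The draws are correlated (not IID), but the sample mean still converges to 0.
rng = np.random.default_rng(0)   # seed is an arbitrary choice
ρ, n = 0.8, 100_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    x[t + 1] = ρ * x[t] + rng.standard_normal()

# Running sample means at increasing sample sizes settle near the mean of 0
for m in (1_000, 10_000, 100_000):
    print(m, x[:m].mean())
```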

lectures/markov_chains.md

Lines changed: 27 additions & 31 deletions
@@ -255,8 +255,6 @@ Then we can address a range of questions, such as

 We'll cover such applications below.

-
-
 ### Defining Markov Chains

 So far we've given examples of Markov chains but now let's define them more
@@ -308,9 +306,6 @@ chain $\{X_t\}$ as follows:

 By construction, the resulting process satisfies {eq}`mpp`.

-
-
-
 ## Simulation

 ```{index} single: Markov Chains; Simulation
@@ -864,10 +859,8 @@ Importantly, the result is valid for any choice of $\psi_0$.

 Notice that the theorem is related to the law of large numbers.

-TODO -- link to our undergrad lln and clt lecture
-
 It tells us that, in some settings, the law of large numbers sometimes holds even when the
-sequence of random variables is not IID.
+sequence of random variables is [not IID](iid_violation).


 (mc_eg1-2)=
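The reworded sentence above is the ergodicity claim itself. A minimal sketch of what it says in practice, reusing the Hamilton transition matrix that appears in a later hunk of this diff; the path length and initial state are arbitrary choices, not part of the lecture.

```python
import numpy as np
import quantecon as qe

# Transition matrix used later in this diff (Hamilton's chain)
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

# One long, serially dependent sample path
X = mc.simulate(ts_length=100_000, init=0)

for x in range(3):
    # Fraction of time the (non-IID) path spends in state x vs. ψ*(x)
    print(x, (X == x).mean(), ψ_star[x])
```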
@@ -912,15 +905,15 @@ n_state = P.shape[1]
 fig, axes = plt.subplots(nrows=1, ncols=n_state)
 ψ_star = mc.stationary_distributions[0]
 plt.subplots_adjust(wspace=0.35)
+
 for i in range(n_state):
     axes[i].grid()
-    axes[i].set_ylim(ψ_star[i]-0.2, ψ_star[i]+0.2)
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
-                    label = fr'$\psi^*(X={i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+                    label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(fr'average time spent at X={i}')
+    axes[i].set_ylabel(f'fraction of time spent at {i}')

-# Compute the fraction of time spent, for each X=x
+# Compute the fraction of time spent, starting from different x_0s
 for x0, col in ((0, 'blue'), (1, 'green'), (2, 'red')):
     # Generate time series that starts at different x0
     X = mc.simulate(n, init=x0)
@@ -949,6 +942,8 @@ $$
 The diagram of the Markov chain shows that it is **irreducible**

 ```{code-cell} ipython3
+:tags: [hide-input]
+
 dot = Digraph(comment='Graph')
 dot.attr(rankdir='LR')
 dot.node("0")
@@ -976,15 +971,16 @@ mc = MarkovChain(P)
 n_state = P.shape[1]
 fig, axes = plt.subplots(nrows=1, ncols=n_state)
 ψ_star = mc.stationary_distributions[0]
+
 for i in range(n_state):
     axes[i].grid()
     axes[i].set_ylim(0.45, 0.55)
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
-                    label = fr'$\psi^*(X={i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+                    label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(fr'average time spent at X={i}')
+    axes[i].set_ylabel(f'fraction of time spent at {i}')

-# Compute the fraction of time spent, for each X=x
+# Compute the fraction of time spent, for each x
 for x0 in range(n_state):
     # Generate time series starting at different x_0
     X = mc.simulate(n, init=x0)
@@ -1078,6 +1074,7 @@ In the case of Hamilton's Markov chain, the distribution $\psi P^t$ converges to
 P = np.array([[0.971, 0.029, 0.000],
               [0.145, 0.778, 0.077],
               [0.000, 0.508, 0.492]])
+
 # Define the number of iterations
 n = 50
 n_state = P.shape[0]
@@ -1097,8 +1094,8 @@ for i in range(n):
 # Loop through many initial values
 for x0 in x0s:
     x = x0
-    X = np.zeros((n,n_state))
-
+    X = np.zeros((n, n_state))
+
     # Obtain and plot distributions at each state
     for t in range(0, n):
         x = x @ P
@@ -1107,10 +1104,10 @@ for x0 in x0s:
         axes[i].plot(range(0, n), X[:,i], alpha=0.3)

 for i in range(n_state):
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
-                    label = fr'$\psi^*(X={i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+                    label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(fr'$\psi(X={i})$')
+    axes[i].set_ylabel(fr'$\psi({i})$')
     axes[i].legend()

 plt.show()
@@ -1147,9 +1144,9 @@ for x0 in x0s:
         axes[i].plot(range(20, n), X[20:,i], alpha=0.3)

 for i in range(n_state):
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^* (X={i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^*({i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(fr'$\psi(X={i})$')
+    axes[i].set_ylabel(fr'$\psi({i})$')
     axes[i].legend()

 plt.show()
@@ -1295,7 +1292,7 @@ In this exercise,

 1. show this process is asymptotically stationary and calculate the stationary distribution using simulations.

-1. use simulation to show ergodicity.
+1. use simulations to demonstrate ergodicity of this process.

 ````

@@ -1323,7 +1320,7 @@ codes_B = ( '1','2','3','4','5','6','7','8')
 np.linalg.matrix_power(P_B, 10)
 ```

-We find rows transition matrix converge to the stationary distribution
+We find that rows of the transition matrix converge to the stationary distribution

 ```{code-cell} ipython3
 mc = qe.MarkovChain(P_B)
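The corrected sentence states that the rows of $P_B^t$ approach the stationary distribution. Since `P_B` is defined outside this hunk, the sketch below checks the same claim on an assumed 2-state stand-in matrix.

```python
import numpy as np
import quantecon as qe

# Stand-in 2-state chain (P_B itself is defined elsewhere in the lecture)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Each row of P^t approaches the stationary distribution ψ*
print(np.linalg.matrix_power(P, 50))
print(qe.MarkovChain(P).stationary_distributions[0])   # ψ* = (0.8, 0.2)
```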
@@ -1344,17 +1341,17 @@ ax.axhline(0, linestyle='dashed', lw=2, color = 'black', alpha=0.4)


 for x0 in range(8):
-    # Calculate the average time for each worker
+    # Calculate the fraction of time for each worker
     X_bar = (X == x0).cumsum() / (1 + np.arange(N, dtype=float))
     ax.plot(X_bar - ψ_star[x0], label=f'$X = {x0+1} $')
 ax.set_xlabel('t')
-ax.set_ylabel(fr'average time spent in a state $- \psi^* (X=x)$')
+ax.set_ylabel(r'fraction of time spent in a state $- \psi^* (x)$')

 ax.legend()
 plt.show()
 ```

-We can see that the time spent at each state quickly converges to the stationary distribution.
+Note that the fraction of time spent at each state quickly converges to the probability assigned to that state by the stationary distribution.

 ```{solution-end}
 ```
@@ -1452,10 +1449,9 @@ However, another way to verify irreducibility is by checking whether $A$ satisfi

 Assume $A$ is an $n \times n$ matrix. $A$ is irreducible if and only if $\sum_{k=0}^{n-1}A^k$ is a positive matrix.

-(see more at \cite{zhao_power_2012} and [here](https://math.stackexchange.com/questions/3336616/how-to-prove-this-matrix-is-a-irreducible-matrix))
+(see more: {cite}`zhao_power_2012` and [here](https://math.stackexchange.com/questions/3336616/how-to-prove-this-matrix-is-a-irreducible-matrix))

 Based on this claim, write a function to test irreducibility.
-
 ```

 ```{solution-start} mc_ex3
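The exercise above asks for a function built on the stated criterion: a nonnegative $n \times n$ matrix $A$ is irreducible if and only if $\sum_{k=0}^{n-1} A^k$ has all entries strictly positive. A minimal sketch of one such test (not necessarily the solution recorded in `mc_ex3`):

```python
import numpy as np

def is_irreducible(A):
    """Test irreducibility of a nonnegative square matrix A via sum_{k=0}^{n-1} A^k > 0."""
    n = A.shape[0]
    A_sum = np.zeros_like(A, dtype=float)
    A_k = np.eye(n)              # A^0
    for _ in range(n):           # accumulate A^0 + A^1 + ... + A^(n-1)
        A_sum += A_k
        A_k = A_k @ A
    return bool(np.all(A_sum > 0))

# Example: a 3-state chain in which every state is reachable from every other
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(is_irreducible(P))   # True
```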
