lectures/markov_chains_II.md
jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
  jupytext_version: 1.14.5
kernelspec:
  display_name: Python 3 (ipykernel)
  language: python
to be installed on your computer. Installation instructions for graphviz can be
found [here](https://www.graphviz.org/download/)
```

## Overview
This lecture continues the journey begun in our {doc}`earlier lecture on Markov chains <markov_chains_I>`.

Specifically, we will introduce the concepts of irreducibility and ergodicity, and see how they connect to stationarity.

Irreducibility describes the ability of a Markov chain to move between any two states in the system.

Ergodicity is a sample path property that describes the behavior of the system over long periods of time.

Both concepts are closely related to stationarity:

* an irreducible Markov chain guarantees the existence of a unique stationary distribution, while
* an ergodic Markov chain generates time series that satisfy a version of the law of large numbers, so that long-run time averages coincide with stationary probabilities regardless of the initial state.

Together, these concepts provide a foundation for understanding the long-term behavior of Markov chains.
## Irreducibility

Irreducibility is a central concept of Markov chain theory.

To explain it, let's take $P$ to be a fixed stochastic matrix.

Two states $x$ and $y$ are said to **communicate** with each other if
there exist positive integers $j$ and $k$ such that
mc = qe.MarkovChain(P, ('poor', 'middle', 'rich'))
mc.is_irreducible
```

It might be clear to you already that irreducibility is going to be important
in terms of long-run outcomes.
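As a minimal sketch of the check itself, assuming the standard definition (states $x$ and $y$ communicate when $P^j(x, y) > 0$ and $P^k(y, x) > 0$ for some positive integers $j$ and $k$, and a finite chain is irreducible when every pair of states communicates), irreducibility can be verified directly from powers of $P$. The matrices below are hypothetical examples for illustration, not the lecture's worker-dynamics matrix:

```python
import numpy as np

def is_irreducible(P):
    """Check irreducibility of a finite Markov chain.

    A chain with n states is irreducible iff every entry of
    I + P + P^2 + ... + P^(n-1) is strictly positive, i.e. every
    state can reach every other state in at most n - 1 steps.
    """
    n = P.shape[0]
    S = np.eye(n)    # running sum of powers of P
    Pk = np.eye(n)   # current power P^k
    for _ in range(1, n):
        Pk = Pk @ P
        S += Pk
    return bool(np.all(S > 0))

# Hypothetical 3-state matrix: every state eventually reaches every other
P = np.array([[0.9, 0.1, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.1, 0.8]])
print(is_irreducible(P))  # True

# A reducible chain: state 0 is absorbing, so it never reaches state 1
Q = np.array([[1.0, 0.0],
              [0.5, 0.5]])
print(is_irreducible(Q))  # False
```

For large chains, `mc.is_irreducible` as used above is preferable: it works via the strong connectivity of the transition graph rather than dense matrix powers.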
In view of our latest (ergodicity) result, it is also the fraction of time that

Thus, in the long run, cross-sectional averages for a population and time-series averages for a given person coincide.

This is one aspect of the concept of ergodicity.
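This coincidence can be illustrated with a short simulation (using a hypothetical two-state chain for simplicity, not one of the lecture's examples): the fraction of time a single long sample path spends in each state converges to the stationary probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain for illustration
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# For a 2-state chain the stationary distribution has the closed form
# psi* = (P[1,0], P[0,1]) / (P[0,1] + P[1,0])
psi_star = np.array([P[1, 0], P[0, 1]]) / (P[0, 1] + P[1, 0])

# Simulate a long sample path and record time spent in each state
T = 100_000
x = 0
visits = np.zeros(2)
for _ in range(T):
    visits[x] += 1
    x = rng.choice(2, p=P[x])

print(visits / T)  # time-series averages, close to psi_star
print(psi_star)    # exactly [2/3, 1/3] for this P
```

The time averages land close to $\psi^* = (2/3, 1/3)$ regardless of whether the path starts in state 0 or state 1.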
(ergo)=
### Example 2

Another example is the Hamilton dynamics {cite}`Hamilton2005` {ref}`discussed before <mc_eg2>`.

The diagram of the Markov chain shows that it is **irreducible**.

Therefore, the sample path averages for each state (the fraction of time spent in each state) converge to the stationary distribution regardless of the starting state.

Let's denote the fraction of time spent in state $x$ at time $t$ in our sample path as $\hat p_t(x)$ and compare it with the stationary distribution $\psi^*(x)$.

```{code-cell} ipython3
P = np.array([[0.971, 0.029, 0.000],