Commit 42908f5

revert unwanted change in mc_II
1 parent e1f0627 commit 42908f5

File tree

1 file changed: +26 −16 lines changed

lectures/markov_chains_II.md

Lines changed: 26 additions & 16 deletions
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.14.5
+    jupytext_version: 1.14.4
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -37,19 +37,24 @@ to be installed on your computer. Installation instructions for graphviz can be
 [here](https://www.graphviz.org/download/)
 ```
 
+
 ## Overview
 
-This lecture continues our journey in Markov chains.
+This lecture continues on from our {doc}`earlier lecture on Markov chains
+<markov_chains_I>`.
 
-Specifically, we will introduce irreducibility and ergodicity, and how they connect to stationarity.
+Specifically, we will introduce the concepts of irreducibility and ergodicity, and see how they connect to stationarity.
 
-Irreducibility is a concept that describes the ability of a Markov chain to move between any two states in the system.
+Irreducibility describes the ability of a Markov chain to move between any two states in the system.
 
 Ergodicity is a sample path property that describes the behavior of the system over long periods of time.
 
-The concepts of irreducibility and ergodicity are closely related to the idea of stationarity.
+As we will see,
 
-An irreducible Markov chain guarantees the existence of a unique stationary distribution, while an ergodic Markov chain ensures that the system eventually reaches its stationary distribution, regardless of its initial state.
+* an irreducible Markov chain guarantees the existence of a unique stationary distribution, while
+* an ergodic Markov chain generates time series that satisfy a version of the
+  law of large numbers.
 
 Together, these concepts provide a foundation for understanding the long-term behavior of Markov chains.
 
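The new overview text asserts that an irreducible Markov chain has a unique stationary distribution. As a minimal NumPy sketch of that claim (the transition matrix below is illustrative, resembling the lecture's Hamilton example, and is not part of this commit):

```python
import numpy as np

# Illustrative 3-state transition matrix (assumed values, not from this commit)
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])
n = P.shape[0]

# Irreducibility check: (I + P)^(n-1) has all entries positive
# iff every state can reach every other state.
reach = np.linalg.matrix_power(np.eye(n) + P, n - 1)
is_irreducible = bool(np.all(reach > 0))

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigvals - 1))
psi_star = np.real(eigvecs[:, i])
psi_star = psi_star / psi_star.sum()

print(is_irreducible)           # True for this P
print(psi_star)                 # the unique stationary distribution
print(psi_star @ P - psi_star)  # ~0, confirming psi* P = psi*
```

The reachability test and the eigenvector normalization are standard techniques; for large chains an iterative solver would be preferable to a full eigendecomposition.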
@@ -263,17 +268,19 @@ In view of our latest (ergodicity) result, it is also the fraction of time that
 
 Thus, in the long run, cross-sectional averages for a population and time-series averages for a given person coincide.
 
-This is one aspect of the concept of ergodicity.
+This is one aspect of the concept of ergodicity.
 
 
 (ergo)=
 ### Example 2
 
-Another example is Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed before <mc_eg2>`.
+Another example is the Hamilton dynamics we {ref}`discussed before <mc_eg2>`.
 
-The diagram of the Markov chain shows that it is **irreducible**.
+The {ref}`graph <mc_eg2>` of the Markov chain shows it is irreducible
 
-Therefore, we can see the sample path averages for each state (the fraction of time spent in each state) converges to the stationary distribution regardless of the starting state
+Therefore, we can see the sample path averages for each state (the fraction of
+time spent in each state) converges to the stationary distribution regardless of
+the starting state
 
 Let's denote the fraction of time spent in state $x$ at time $t$ in our sample path as $\hat p_t(x)$ and compare it with the stationary distribution $\psi^* (x)$
 
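The quantity $\hat p_t(x)$ defined in this hunk can be computed directly by simulation. Here is a minimal sketch with a hypothetical two-state matrix (not the one in the lecture), showing the fraction of time in each state converging to $\psi^*$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state chain (assumed values, for illustration only)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
psi_star = np.array([2/3, 1/3])  # solves psi P = psi for this P

# Simulate one sample path and count visits to each state
T = 100_000
x = 0
counts = np.zeros(2)
for t in range(T):
    counts[x] += 1
    x = rng.choice(2, p=P[x])

p_hat = counts / T  # hat p_T(x): fraction of time spent in each state
print(p_hat)        # close to psi_star = [0.667, 0.333]
```

Rerunning with `x = 1` as the initial state gives essentially the same `p_hat`, which is the "regardless of the starting state" claim in the text.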
@@ -304,7 +311,7 @@ for i in range(n):
 plt.show()
 ```
 
-Note that the convergence to the stationary distribution regardless of the starting point $x_0$.
+Note the convergence to the stationary distribution regardless of the starting point $x_0$.
 
 ### Example 3
 
@@ -324,9 +331,10 @@ P :=
 $$
 
 
-The graph for the chain shows states are densely connected indicating that it is **irreducible**.
+The {ref}`graph <mc_eg3>` for the chain shows all states are reachable,
+indicating that this chain is irreducible.
 
-Then we visualize the difference between $\hat p_t(x)$ and the stationary distribution $\psi^* (x)$
+Here we visualize the difference between $\hat p_t(x)$ and the stationary distribution $\psi^* (x)$ for each state $x$
 
 ```{code-cell} ipython3
 P = [[0.86, 0.11, 0.03, 0.00, 0.00, 0.00],
@@ -357,7 +365,8 @@ ax.legend()
 plt.show()
 ```
 
-Similar to previous examples, the sample path averages for each state converge to the stationary distribution as the trend converge towards 0.
+Similar to previous examples, the sample path averages for each state converge
+to the stationary distribution.
 
 ### Example 4
 
@@ -389,7 +398,8 @@ dot.edge("1", "0", label="1.0", color='red')
 dot
 ```
 
-Unlike other Markov chains we have seen before, it has a periodic cycle --- the state cycles between the two states in a regular way.
+
+In fact it has a periodic cycle --- the state cycles between the two states in a regular way.
 
 This is called [periodicity](https://www.randomservices.org/random/markov/Periodicity.html).
 
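The periodic two-state chain in this hunk (each state moves to the other with probability 1.0, matching the edge labels in the diff) can be checked numerically. The period of a state is the gcd of all lengths $n$ with $P^n[x, x] > 0$; a sketch, assuming a small truncation horizon is enough for this deterministic cycle:

```python
import numpy as np
from math import gcd
from functools import reduce

# The two-state cycle from the hunk: the state flips every step
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def period(P, x, max_n=20):
    """gcd of return times n <= max_n with P^n[x, x] > 0."""
    ns = [n for n in range(1, max_n + 1)
          if np.linalg.matrix_power(P, n)[x, x] > 0]
    return reduce(gcd, ns)

print(period(P, 0))  # 2: state 0 can only return at even times
```

A chain is aperiodic when every state has period 1; periodicity like this is why convergence of marginal distributions can fail even for an irreducible chain.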
@@ -606,7 +616,7 @@ The result should be similar to the plot we plotted [here](ergo)
 
 We will address this exercise graphically.
 
-The plots show the time series of $\bar{\{X=x\}}_m - p$ for two initial
+The plots show the time series of $\bar X_m - p$ for two initial
 conditions.
 
 As $m$ gets large, both series converge to zero.
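The exercise's claim that $\bar X_m - p$ converges to zero from any initial condition can be sketched by simulation. The two-state employment-style matrix and its parameters below are assumptions for illustration, not values from this commit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state chain (assumed rates, for illustration only)
alpha, beta = 0.3, 0.2
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])
p = beta / (alpha + beta)  # stationary probability of state 0

def xbar_minus_p(x0, m=50_000):
    """Running average of 1{X_t = 0} minus p, started from x0."""
    x, hits = x0, 0
    series = np.empty(m)
    for t in range(m):
        hits += (x == 0)
        series[t] = hits / (t + 1) - p
        x = rng.choice(2, p=P[x])
    return series

for x0 in (0, 1):
    print(x0, xbar_minus_p(x0)[-1])  # final gap is near 0 for both starts
```

Plotting each `series` against $m$ reproduces the qualitative picture the exercise asks for: both paths wander early on and then settle toward zero.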
