@@ -61,6 +61,8 @@ import matplotlib as mpl
 from mpl_toolkits.mplot3d import Axes3D
 from matplotlib.animation import FuncAnimation
 from IPython.display import HTML
+from matplotlib.patches import Polygon
+from mpl_toolkits.mplot3d.art3d import Poly3DCollection
 ```
 
 ## Definitions and examples
@@ -743,6 +745,11 @@ This is, in some sense, a steady state probability of unemployment.
 
 Not surprisingly it tends to zero as $\beta \to 0$, and to one as $\alpha \to 0$.
 
+
+
+
+
+
 ### Calculating stationary distributions
 
 A stable algorithm for computing stationary distributions is implemented in [QuantEcon.py](http://quantecon.org/quantecon-py).
@@ -757,6 +764,11 @@ mc = qe.MarkovChain(P)
 mc.stationary_distributions # Show all stationary distributions
 ```
 
+
+
+
+
+
 ### Asymptotic stationarity
 
 Consider an everywhere positive stochastic matrix with unique stationary distribution $\psi^*$.
@@ -767,17 +779,24 @@ For example, we have the following result
 
 (strict_stationary)=
 ```{prf:theorem}
+:label: mc_gs_thm
+
 If there exists an integer $m$ such that all entries of $P^m$ are
-strictly positive, with unique stationary distribution $\psi^*$, then
+strictly positive, then
 
 $$
     \psi_0 P^t \to \psi^*
     \quad \text{ as } t \to \infty
 $$
+
+where $\psi^*$ is the unique stationary distribution.
 ```
 
+This situation is often referred to as **asymptotic stationarity** or **global stability**.
+
+A proof of the theorem can be found in Chapter 4 of {cite}`sargent2023economic`, as well as many other sources.
+
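+To build intuition for the theorem, here is a quick numerical check (a
+minimal sketch reusing Hamilton's stochastic matrix from earlier in the
+lecture): we confirm that $P^m$ is everywhere positive for some $m$ and
+that iterates of an arbitrary $\psi_0$ approach $\psi^*$.
+
+```{code-cell} ipython3
+P = np.array([[0.971, 0.029, 0.000],
+              [0.145, 0.778, 0.077],
+              [0.000, 0.508, 0.492]])
+
+# All entries of P^2 are strictly positive, so the theorem applies
+print(np.all(np.linalg.matrix_power(P, 2) > 0))
+
+# Iterates of an arbitrary ψ_0 approach the stationary distribution ψ*
+ψ_star = qe.MarkovChain(P).stationary_distributions[0]
+ψ_0 = np.array([0.8, 0.1, 0.1])
+print(ψ_0 @ np.linalg.matrix_power(P, 200) - ψ_star)  # ≈ 0
+```
+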
 
-See, for example, {cite}`sargent2023economic` Chapter 4.
 
 
 
@@ -848,103 +867,96 @@ Here
 You might like to try experimenting with different initial conditions.
 
 
-#### An alternative illustration
 
-We can show this in a slightly different way by focusing on the probability that $\psi_t$ puts on each state.
 
-First, we write a function to draw initial distributions $\psi_0$ of size `num_distributions`
-
-```{code-cell} ipython3
-def generate_initial_values(num_distributions):
-    n = len(P)
-
-    draws = np.random.randint(1, 10_000_000, size=(num_distributions, n))
-    ψ_0s = draws/draws.sum(axis=1)[:, None]
-
-    return ψ_0s
-```
+#### Example: failure of convergence
 
-We then write a function to plot the dynamics of $(\psi_0 P^t)(i)$ as $t$ gets large, for each state $i$ with different initial distributions
 
-```{code-cell} ipython3
-def plot_distribution(P, ts_length, num_distributions):
+Consider the periodic chain with stochastic matrix
 
-    # Get parameters of transition matrix
-    n = len(P)
-    mc = qe.MarkovChain(P)
-    ψ_star = mc.stationary_distributions[0]
+$$
+P =
+\begin{bmatrix}
+    0 & 1 \\
+    1 & 0 \\
+\end{bmatrix}
+$$
 
-    ## Draw the plot
-    fig, axes = plt.subplots(nrows=1, ncols=n, figsize=[11, 5])
-    plt.subplots_adjust(wspace=0.35)
+This matrix does not satisfy the conditions of
+{ref}`strict_stationary` because, as you can readily check,
 
-    ψ_0s = generate_initial_values(num_distributions)
+* $P^m = P$ when $m$ is odd and
+* $P^m = I$, the identity matrix, when $m$ is even.
 
-    # Get the path for each starting value
-    for ψ_0 in ψ_0s:
-        ψ_t = iterate_ψ(ψ_0, P, ts_length)
+Hence there is no $m$ such that all elements of $P^m$ are strictly positive.
 
-        # Obtain and plot distributions at each state
-        for i in range(n):
-            axes[i].plot(range(0, ts_length), ψ_t[:, i], alpha=0.3)
+Moreover, we can see that global stability does not hold.
 
-    # Add labels
-    for i in range(n):
-        axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color='black',
-                        label=fr'$\psi^*({i})$')
-        axes[i].set_xlabel('t')
-        axes[i].set_ylabel(fr'$\psi_t({i})$')
-        axes[i].legend()
+For instance, if we start at $\psi_0 = (1, 0)$, then $\psi_m = \psi_0 P^m$ is $(1, 0)$ when $m$ is even and $(0, 1)$ when $m$ is odd.
 
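+We can confirm this oscillation numerically (a minimal sketch, assuming
+`np` from the imports above):
+
+```{code-cell} ipython3
+P = np.array([[0, 1],
+              [1, 0]])
+ψ_0 = np.array([1, 0])
+
+# ψ_m alternates between (0, 1) and (1, 0)
+for m in range(1, 5):
+    print(m, ψ_0 @ np.linalg.matrix_power(P, m))
+```
+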
-    plt.show()
-```
+We can see similar phenomena in higher dimensions.
 
-The following figure shows
+The next figure illustrates this for a periodic Markov chain with three states.
 
 ```{code-cell} ipython3
-# Define the number of iterations
-# and initial distributions
-ts_length = 50
-num_distributions = 25
-
-P = np.array([[0.971, 0.029, 0.000],
-              [0.145, 0.778, 0.077],
-              [0.000, 0.508, 0.492]])
-
-plot_distribution(P, ts_length, num_distributions)
-```
-
-The convergence to $\psi^*$ holds for different initial distributions.
+ψ_1 = (0.0, 0.0, 1.0)
+ψ_2 = (0.5, 0.5, 0.0)
+ψ_3 = (0.25, 0.25, 0.5)
+ψ_4 = (1/3, 1/3, 1/3)
 
+P = np.array([[0.0, 1.0, 0.0],
+              [0.0, 0.0, 1.0],
+              [1.0, 0.0, 0.0]])
 
+fig = plt.figure()
+ax = fig.add_subplot(projection='3d')
+colors = ['red', 'yellow', 'green', 'blue']  # Different colors for each initial point
 
-#### Example: failure of convergence
-
+# Define the vertices of the unit simplex
+v = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]])
 
-In the case of a periodic chain, with
+# Define the faces of the unit simplex
+faces = [
+    [v[0], v[1], v[2]],
+    [v[0], v[1], v[3]],
+    [v[0], v[2], v[3]],
+    [v[1], v[2], v[3]]
+]
 
-$$
-P =
-\begin{bmatrix}
-    0 & 1 \\
-    1 & 0 \\
-\end{bmatrix}
-$$
+def update(n):
+    ax.clear()
+    ax.set_xlim([0, 1])
+    ax.set_ylim([0, 1])
+    ax.set_zlim([0, 1])
+    ax.view_init(45, 45)
+
+    # Plot the 3D unit simplex as transparent planes
+    simplex = Poly3DCollection(faces, alpha=0.05)
+    ax.add_collection3d(simplex)
+
+    for idx, ψ_0 in enumerate([ψ_1, ψ_2, ψ_3, ψ_4]):
+        ψ_t = iterate_ψ(ψ_0, P, n+1)
+
+        point = ψ_t[-1]
+        ax.scatter(point[0], point[1], point[2], color=colors[idx], s=60)
+        points = np.array(ψ_t)
+        ax.plot(points[:, 0], points[:, 1], points[:, 2], color=colors[idx], linewidth=0.75)
+
+    return fig,
 
-we find the distribution oscillates
+anim = FuncAnimation(fig, update, frames=range(20), blit=False, repeat=False)
+plt.close()
+HTML(anim.to_jshtml())
+```
+
+This animation demonstrates the behavior of an irreducible and periodic stochastic matrix.
 
-```{code-cell} ipython3
-P = np.array([[0, 1],
-              [1, 0]])
+The red, yellow, and green dots represent different initial probability distributions.
 
-ts_length = 20
-num_distributions = 30
+The blue dot represents the unique stationary distribution, $\psi^* = (1/3, 1/3, 1/3)$.
 
-plot_distribution(P, ts_length, num_distributions)
-```
+Unlike the case of Hamilton's Markov chain, these initial distributions do not converge to the unique stationary distribution.
 
-Indeed, this $P$ fails our asymptotic stationarity condition, since, as you can
-verify, $P^t$ is not everywhere positive for any $t$.
+Instead, they cycle periodically around the probability simplex, illustrating that asymptotic stationarity fails.
 
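+Even though convergence fails, the uniform distribution is still stationary
+for this $P$. A quick check (a sketch reusing the `P` defined in the
+animation code above):
+
+```{code-cell} ipython3
+ψ_star = np.array([1/3, 1/3, 1/3])
+print(ψ_star @ P)  # ψ* P = ψ*, so ψ* is stationary
+
+# Other starting points cycle with period 3 instead of converging
+ψ_0 = np.array([1.0, 0.0, 0.0])
+for m in range(1, 4):
+    print(m, ψ_0 @ np.linalg.matrix_power(P, m))
+```
+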
 
 
@@ -1105,42 +1117,6 @@ mc = qe.MarkovChain(P)
 ψ_star
 ```
 
-Solution 3:
-
-We find the distribution $\psi$ converges to the stationary distribution more quickly compared to {ref}`Hamilton's chain <hamilton>`.
-
-```{code-cell} ipython3
-ts_length = 10
-num_distributions = 25
-plot_distribution(P, ts_length, num_distributions)
-```
-
-In fact, the rate of convergence is governed by {ref}`eigenvalues <eigen>` {cite}`sargent2023economic`.
-
-```{code-cell} ipython3
-P_eigenvals = np.linalg.eigvals(P)
-P_eigenvals
-```
-
-```{code-cell} ipython3
-P_hamilton = np.array([[0.971, 0.029, 0.000],
-                       [0.145, 0.778, 0.077],
-                       [0.000, 0.508, 0.492]])
-
-hamilton_eigenvals = np.linalg.eigvals(P_hamilton)
-hamilton_eigenvals
-```
-
-More specifically, it is governed by the spectral gap, the difference between the largest and the second largest eigenvalue.
-
-```{code-cell} ipython3
-sp_gap_P = P_eigenvals[0] - np.diff(P_eigenvals)[0]
-sp_gap_hamilton = hamilton_eigenvals[0] - np.diff(hamilton_eigenvals)[0]
-
-sp_gap_P > sp_gap_hamilton
-```
-
-We will come back to this when we discuss {ref}`spectral theory <spec_markov>`.
 
 ```{solution-end}
 ```