
Commit a38d340

Merge pull request #75 from QuantEcon/ci_1
Fix link checker
2 parents 28b276c + 29e10e3 commit a38d340

File tree

2 files changed: +30 -35 lines changed


lectures/_static/quant-econ.bib

Lines changed: 0 additions & 5 deletions
@@ -2518,8 +2518,6 @@ @article{benhabib_wealth_2019
 volume = {109},
 issn = {0002-8282},
 shorttitle = {Wealth {Distribution} and {Social} {Mobility} in the {US}},
-url = {https://www.aeaweb.org/articles?id=10.1257/aer.20151684},
-doi = {10.1257/aer.20151684},
 abstract = {We quantitatively identify the factors that drive wealth dynamics in the United States and are consistent with its skewed cross-sectional distribution and with social mobility. We concentrate on three critical factors: (i) skewed earnings, (ii) differential saving rates across wealth levels, and (iii) stochastic idiosyncratic returns to wealth. All of these are fundamental for matching both distribution and mobility. The stochastic process for returns which best fits the cross-sectional distribution of wealth and social mobility in the United States shares several statistical properties with those of the returns to wealth uncovered by Fagereng et al. (2017) from tax records in Norway.},
 language = {en},
 number = {5},
@@ -2535,7 +2533,6 @@ @article{benhabib_wealth_2019
 
 @article{cobweb_model,
 ISSN = {10711031},
-URL = {http://www.jstor.org/stable/1236509},
 abstract = {In recent years, economists have become much interested in recursive models. This interest stems from a growing need for long-term economic projections and for forecasting the probable effects of economic programs and policies. In a dynamic world, past and present conditions help shape future conditions. Perhaps the simplest recursive model is the two-dimensional "cobweb diagram," discussed by Ezekiel in 1938. The present paper attempts to generalize the simple cobweb model somewhat. It considers some effects of price supports. It discusses multidimensional cobwebs to describe simultaneous adjustments in prices and outputs of a number of commodities. And it allows for time trends in the variables.},
 author = {Frederick V. Waugh},
 journal = {Journal of Farm Economics},
@@ -2556,8 +2553,6 @@ @article{hog_cycle
 number = {4},
 pages = {842-853},
 doi = {https://doi.org/10.2307/1235116},
-url = {https://onlinelibrary.wiley.com/doi/abs/10.2307/1235116},
-eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.2307/1235116},
 abstract = {Abstract A surprisingly regular four year cycle in hogs has become apparent in the past ten years. This regularity presents an unusual opportunity to study the mechanism of the cycle because it suggests the cycle may be inherent within the industry rather than the result of lagged responses to outside influences. The cobweb theorem is often mentioned as a theoretical tool for explaining the hog cycle, although a two year cycle is usually predicted. When the nature of the hog industry is examined, certain factors become apparent which enable the cobweb theorem to serve as a theoretical basis for the present four year cycle.},
 year = {1960}
 }

lectures/markov_chains.md

Lines changed: 30 additions & 30 deletions
@@ -9,7 +9,7 @@ kernelspec:
   name: python3
 ---
 
-# Markov Chains
+# Markov Chains
 
 In addition to what's in Anaconda, this lecture will need the following libraries:
 
@@ -24,7 +24,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 Markov chains are a standard way to model time series with some
 dependence between observations.
 
-For example,
+For example,
 
 * inflation next year depends on inflation this year
 * unemployment next month depends on unemployment this month
@@ -34,7 +34,7 @@ Markov chains are one of the workhorse models of economics and finance.
 The theory of Markov chains is beautiful and insightful, which is another
 excellent reason to study them.
 
-In this introductory lecture, we will
+In this introductory lecture, we will
 
 * review some of the key ideas from the theory of Markov chains and
 * show how Markov chains appear in some economic applications.
@@ -53,7 +53,7 @@ import numpy as np
 In this section we provide the basic definitions and some elementary examples.
 
 (finite_dp_stoch_mat)=
-### Stochastic Matrices
+### Stochastic Matrices
 
 Recall that a **probability mass function** over $n$ possible outcomes is a
 nonnegative $n$-vector $p$ that sums to one.
@@ -93,7 +93,7 @@ Therefore $P^{k+1} \mathbf 1 = P P^k \mathbf 1 = P \mathbf 1 = \mathbf 1$
 The proof is done.
 
 
-### Markov Chains
+### Markov Chains
 
 Now we can introduce Markov chains.
 
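The context line of this hunk relies on the fact that $P^k \mathbf 1 = \mathbf 1$ whenever $P$ is a stochastic matrix. A minimal numerical check of that property, using an illustrative matrix rather than one from the lecture:

```python
import numpy as np

# An illustrative 2x2 stochastic matrix (each row is a probability mass function)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
ones = np.ones(2)

for k in range(1, 6):
    Pk = np.linalg.matrix_power(P, k)
    # Every power of a stochastic matrix should map the ones vector to itself
    assert np.allclose(Pk @ ones, ones)
```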
@@ -126,7 +126,7 @@ dot.edge("mr", "ng", label="0.145")
 dot.edge("mr", "mr", label="0.778")
 dot.edge("mr", "sr", label="0.077")
 dot.edge("sr", "mr", label="0.508")
-
+
 dot.edge("sr", "sr", label="0.492")
 dot
 ```
@@ -199,7 +199,7 @@ More generally, for any $i,j$ between 0 and 2, we have
 $$
 \begin{aligned}
 P(i,j)
-& = \mathbb P\{X_{t+1} = j \,|\, X_t = i\}
+& = \mathbb P\{X_{t+1} = j \,|\, X_t = i\}
 \\
 & = \text{ probability of transitioning from state $i$ to state $j$ in one month}
 \end{aligned}
@@ -234,11 +234,11 @@ For example,
 
 $$
 \begin{aligned}
-P(0,1)
-& =
+P(0,1)
+& =
 \text{ probability of transitioning from state $0$ to state $1$ in one month}
 \\
-& =
+& =
 \text{ probability finding a job next month}
 \\
 & = \alpha
@@ -303,7 +303,7 @@
 Going the other way, if we take a stochastic matrix $P$, we can generate a Markov
 chain $\{X_t\}$ as follows:
 
-* draw $X_0$ from a marginal distribution $\psi$
+* draw $X_0$ from a marginal distribution $\psi$
 * for each $t = 0, 1, \ldots$, draw $X_{t+1}$ from $P(X_t,\cdot)$
 
 By construction, the resulting process satisfies {eq}`mpp`.
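The two bullet points in this hunk describe the simulation algorithm itself, so a short sketch may help; the matrix `P`, the initial distribution `ψ` and the sample length below are illustrative assumptions, not part of this diff:

```python
import numpy as np

P = np.array([[0.9, 0.1],          # illustrative stochastic matrix
              [0.4, 0.6]])
ψ = np.array([0.5, 0.5])           # illustrative marginal distribution for X_0
ts_length = 10

rng = np.random.default_rng(seed=0)
X = np.empty(ts_length, dtype=int)
X[0] = rng.choice(len(ψ), p=ψ)                 # draw X_0 from ψ
for t in range(ts_length - 1):
    X[t+1] = rng.choice(len(ψ), p=P[X[t]])     # draw X_{t+1} from P(X_t, ·)
```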
@@ -458,7 +458,7 @@ mc.simulate_indices(ts_length=4)
 ```
 
 (mc_md)=
-## Marginal Distributions
+## Marginal Distributions
 
 Suppose that
 
@@ -827,7 +827,7 @@ mc.stationary_distributions # Show all stationary distributions
 ```
 
 (ergodicity)=
-## Ergodicity
+## Ergodicity
 
 Under irreducibility, yet another important result obtains:
 
@@ -900,7 +900,7 @@ Another example is Hamilton {cite}`Hamilton2005` dynamics {ref}`discussed above
 
 The diagram of the Markov chain shows that it is **irreducible**.
 
-Therefore, we can see the sample path averages for each state (the fraction of time spent in each state) converges to the stationary distribution regardless of the starting state
+Therefore, we can see the sample path averages for each state (the fraction of time spent in each state) converges to the stationary distribution regardless of the starting state
 
 ```{code-cell} ipython3
 P = np.array([[0.971, 0.029, 0.000],
@@ -915,7 +915,7 @@ plt.subplots_adjust(wspace=0.35)
 for i in range(n_state):
     axes[i].grid()
     axes[i].set_ylim(ψ_star[i]-0.2, ψ_star[i]+0.2)
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                     label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
     axes[i].set_ylabel(fr'average time spent at X={i}')
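For reference, the quantity plotted by the loop in this hunk (the fraction of time a sample path spends in each state) can also be computed directly. A minimal sketch, using the transition matrix whose first row appears in this diff (remaining rows taken from the edge labels above) and a simulation length chosen for illustration:

```python
import numpy as np
import quantecon as qe

# Hamilton's transition matrix, reconstructed from the lines shown in this diff
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

mc = qe.MarkovChain(P)
ψ_star = mc.stationary_distributions[0]

X = mc.simulate(ts_length=100_000)                      # one long sample path
fractions = [(X == i).mean() for i in range(len(P))]    # time spent in each state
print(fractions)   # should be close to ψ_star
print(ψ_star)
```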
@@ -962,7 +962,7 @@ dot
 
 As you might notice, unlike other Markov chains we have seen before, it has a periodic cycle.
 
-This is formally called [periodicity](https://stats.libretexts.org/Bookshelves/Probability_Theory/Probability_Mathematical_Statistics_and_Stochastic_Processes_(Siegrist)/16:_Markov_Processes/16.05:_Periodicity_of_Discrete-Time_Chains#:~:text=A%20state%20in%20a%20discrete,limiting%20behavior%20of%20the%20chain.).
+This is formally called [periodicity](https://www.randomservices.org/random/markov/Periodicity.html).
 
 We will not go into the details of periodicity.
 
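The periodicity mentioned in this hunk can be checked programmatically with `quantecon`; a minimal sketch with an obviously periodic two-state chain (illustrative, not the chain drawn in the lecture):

```python
import numpy as np
import quantecon as qe

# This chain alternates deterministically between its two states
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

mc = qe.MarkovChain(P)
print(mc.period)         # 2
print(mc.is_aperiodic)   # False
```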
@@ -979,7 +979,7 @@ fig, axes = plt.subplots(nrows=1, ncols=n_state)
 for i in range(n_state):
     axes[i].grid()
     axes[i].set_ylim(0.45, 0.55)
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                     label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
     axes[i].set_ylabel(fr'average time spent at X={i}')
@@ -1016,7 +1016,7 @@ strictly positive, then $P$ has only one stationary distribution $\psi^*$ and
 $$
 \psi_0 P^t \to \psi
 \quad \text{as } t \to \infty
-$$
+$$
 
 
 (See, for example, {cite}`haggstrom2002finite`. Our assumptions imply that $P$
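The displayed limit $\psi_0 P^t \to \psi^*$ is easy to illustrate numerically; `P`, the initial distribution and the number of iterations below are illustrative assumptions:

```python
import numpy as np

P = np.array([[0.9, 0.1],      # every entry strictly positive
              [0.4, 0.6]])
ψ = np.array([1.0, 0.0])       # an arbitrary initial distribution ψ_0

for t in range(50):
    ψ = ψ @ P                  # compute ψ_0 P^t iteratively

# The limit should be the unique stationary distribution: ψ* P = ψ*
assert np.allclose(ψ @ P, ψ)
print(ψ)                       # ≈ [0.8, 0.2] for this P
```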
@@ -1090,24 +1090,24 @@ plt.subplots_adjust(wspace=0.35)
 x0s = np.ones((n, n_state))
 for i in range(n):
     draws = np.random.randint(1, 10_000_000, size=n_state)
-
+
     # Scale them so that they add up into 1
     x0s[i,:] = np.array(draws/sum(draws))
 
 # Loop through many initial values
 for x0 in x0s:
     x = x0
     X = np.zeros((n,n_state))
-
+
     # Obtain and plot distributions at each state
     for t in range(0, n):
-        x = x @ P
+        x = x @ P
         X[t] = x
     for i in range(n_state):
         axes[i].plot(range(0, n), X[:,i], alpha=0.3)
-
+
 for i in range(n_state):
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
                     label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
     axes[i].set_ylabel(fr'$\psi(X={i})$')
@@ -1139,13 +1139,13 @@ for i in range(n):
 for x0 in x0s:
     x = x0
     X = np.zeros((n,n_state))
-
+
     for t in range(0, n):
         x = x @ P
         X[t] = x
     for i in range(n_state):
         axes[i].plot(range(20, n), X[20:,i], alpha=0.3)
-
+
 for i in range(n_state):
     axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^* (X={i})$')
     axes[i].set_xlabel('t')
@@ -1260,7 +1260,7 @@ TODO -- connect to the Neumann series lemma (Maanasee)
 
 ## Exercises
 
-````{exercise}
+````{exercise}
 :label: mc_ex1
 
 Benhabib el al. {cite}`benhabib_wealth_2019` estimated that the transition matrix for social mobility as the following
@@ -1291,7 +1291,7 @@ P_B = np.array(P_B)
 codes_B = ( '1','2','3','4','5','6','7','8')
 ```
 
-In this exercise,
+In this exercise,
 
 1. show this process is asymptotically stationary and calculate the stationary distribution using simulations.
 
@@ -1323,7 +1323,7 @@ codes_B = ( '1','2','3','4','5','6','7','8')
 np.linalg.matrix_power(P_B, 10)
 ```
 
-We find rows transition matrix converge to the stationary distribution
+We find rows transition matrix converge to the stationary distribution
 
 ```{code-cell} ipython3
 mc = qe.MarkovChain(P_B)
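The check shown in this hunk (rows of $P_B^t$ approaching the stationary distribution) can be written as an assertion; since the entries of `P_B` are not visible in this diff, the sketch below uses an illustrative stand-in matrix:

```python
import numpy as np
import quantecon as qe

P = np.array([[0.9, 0.1],      # illustrative stand-in for P_B
              [0.4, 0.6]])

rows = np.linalg.matrix_power(P, 50)                     # every row of P^50
ψ_star = qe.MarkovChain(P).stationary_distributions[0]

# Each row should (approximately) equal the stationary distribution
assert np.allclose(rows, ψ_star)
```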
@@ -1360,7 +1360,7 @@ We can see that the time spent at each state quickly converges to the stationary
 ```
 
 
-```{exercise}
+```{exercise}
 :label: mc_ex2
 
 According to the discussion {ref}`above <mc_eg1-2>`, if a worker's employment dynamics obey the stochastic matrix
@@ -1443,7 +1443,7 @@ plt.show()
 ```{solution-end}
 ```
 
-```{exercise}
+```{exercise}
 :label: mc_ex3
 
 In `quantecon` library, irreducibility is tested by checking whether the chain forms a [strongly connected component](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.components.is_strongly_connected.html).
