Commit 62b5201

misc
Misc edits
1 parent ef9a3db commit 62b5201

File tree

5 files changed: +153, -50 lines


lectures/_config.yml

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
-title: Introductory Computational Economics and Finance
+title: An Introduction to Quantitative Economics and Finance
 author: Thomas J. Sargent & John Stachurski
 logo: _static/qe-logo.png
-description: This website presents an undergraduate set of lectures computational economics, designed and written by Thomas J. Sargent and John Stachurski.
+description: This website presents introductory lectures on computational economics, designed and written by Thomas J. Sargent and John Stachurski.
 
 parse:
   myst_enable_extensions: # default extensions to enable in the myst parser. See https://myst-parser.readthedocs.io/en/latest/using/syntax-optional.html

lectures/_toc.yml

Lines changed: 2 additions & 1 deletion
@@ -4,7 +4,8 @@ parts:
 - caption: Introduction
   numbered: true
   chapters:
-  - file: intro_economics
+  - file: about
+  - file: long_run_growth
   - file: inequality
 - caption: Tools & Techniques
   numbered: true

lectures/about.md

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
+# About these Lectures
+
+## About
+
+This lecture series introduces quantitative economics using elementary
+mathematics and statistics plus computer code written in
+[Python](https://www.python.org/).
+
+The lectures emphasize simulation and visualization through code as a way to
+convey ideas, rather than focusing on mathematical details.
+
+Although the presentation is quite novel, the ideas are rather foundational.
+
+We emphasize the deep and fundamental importance of economic theory, as well
+as the value of analyzing data and understanding stylized facts.
+
+The lectures can be used for university courses, self-study, reading groups or
+workshops.
+
+Researchers and policy professionals might also find some parts of the series
+valuable for their work.
+
+We hope the lectures will be of interest to students of economics
+who want to learn both economics and computing, as well as students from
+fields such as computer science and engineering who are curious about
+economics.
+
+## Level
+
+The lecture series is aimed at undergraduate students.
+
+The level of the lectures varies from truly introductory (suitable for first
+year undergraduates or even high school students) to more intermediate.
+
+The more intermediate lectures require comfort with linear algebra and some
+mathematical maturity (e.g., calmly reading theorems and trying to understand
+their meaning).
+
+In general, easier lectures occur earlier in the lecture
+series and harder lectures occur later.
+
+We assume that readers have covered the easier parts of the QuantEcon lecture
+series [on Python programming](https://python-programming.quantecon.org/intro.html).
+
+In particular, readers should be familiar with basic Python syntax, including
+Python functions. Knowledge of classes and Matplotlib will be beneficial but
+not essential.
+
+## Credits
+
+In building this lecture series, we had invaluable assistance from research
+assistants at QuantEcon, as well as our QuantEcon colleagues. Without their
+help this series would not have been possible.
+
+In particular, we sincerely thank and give credit to
+
+- [Aakash Gupta](https://github.com/AakashGfude)
+- [Shu Hu](https://github.com/shlff)
+- [Smit Lunagariya](https://github.com/Smit-create)
+- [Matthew McKay](https://github.com/mmcky)
+- [Maanasee Sharma](https://github.com/maanasee)
+- [Humphrey Yang](https://github.com/HumphreyYang)
+
+We also thank Noritaka Kudoh for encouraging us to start this project and providing thoughtful suggestions.

lectures/intro_economics.md renamed to lectures/long_run_growth.md

Lines changed: 49 additions & 20 deletions
@@ -9,7 +9,7 @@ kernelspec:
   name: python3
 ---
 
-# Introduction to Economics
+# Long Run Growth
 
 ```{index} single: Introduction to Economics
 ```
@@ -18,10 +18,33 @@
 :depth: 2
 ```
 
-## World Bank Data - GDP Per Capita (Current US$)
+## Overview
 
+This lecture is about how different economies grow over the long run.
 
-GDP per capita is gross domestic product divided by midyear population. GDP is the sum of gross value added by all resident producers in the economy plus any product taxes and minus any subsidies not included in the value of the products. It is calculated without making deductions for depreciation of fabricated assets or for depletion and degradation of natural resources. Data are in current U.S. dollars.
+As we will see, some countries have had very different growth experiences
+since the end of WWII.
+
+References:
+
+* https://www.imf.org/en/Publications/fandd/issues/Series/Back-to-Basics/gross-domestic-product-GDP
+* https://www.stlouisfed.org/open-vault/2019/march/what-is-gdp-why-important
+* https://wol.iza.org/articles/gross-domestic-product-are-other-measures-needed
+
+One drawback of focusing on GDP growth is that it makes no allowance for
+depletion and degradation of natural resources.
+
+GDP per capita is gross domestic product divided by population.
+
+GDP is the sum of gross value added by all resident producers in the economy
+plus any product taxes and minus any subsidies not included in the value of
+the products.
+
+We use World Bank data on GDP per capita in current U.S. dollars.
+
+We require the following imports.
 
 ```{code-cell} ipython3
 import pandas as pd
@@ -31,26 +54,13 @@ import matplotlib.pyplot as plt
 import numpy as np
 ```
 
-```{code-cell} ipython3
-wbi = pd.read_csv("datasets/GDP_per_capita_world_bank.csv")
-```
 
-## Histogram comparison between 1960, 1990, 2020
+The following code reads the data into a pandas DataFrame.
 
 ```{code-cell} ipython3
-def get_log_hist(data, years):
-    filtered_data = data.filter(items=['Country Code', years[0], years[1], years[2]])
-    log_gdp = filtered_data.iloc[:,1:].transform(lambda x: np.log(x))
-    max_log_gdp = log_gdp.max(numeric_only=True).max()
-    min_log_gdp = log_gdp.min(numeric_only=True).min()
-    log_gdp.hist(bins=16, range=[min_log_gdp, max_log_gdp], log=True)
+wbi = pd.read_csv("datasets/GDP_per_capita_world_bank.csv")
 ```
 
-```{code-cell} ipython3
-## All countries
-wbiall = wbi.drop(['Country Name' , 'Indicator Name', 'Indicator Code'], axis=1)
-get_log_hist(wbiall, ['1960', '1990', '2020'])
-```
 
 ## Comparison of GDP between different Income Groups
 
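The income-group comparisons in this section rely on a helper `filter_country_list_data`, which is defined elsewhere in the lecture and not shown in this diff. A plausible sketch of such a helper, using a made-up mini dataset in the same shape as the World Bank file (both the function body and the numbers here are assumptions, not part of the commit):

```python
import pandas as pd

def filter_country_list_data(data, country_codes):
    # Keep only the requested countries, index by country code, and
    # transpose so that years run down the rows (convenient for .plot())
    subset = data[data['Country Code'].isin(country_codes)]
    subset = subset.set_index('Country Code')
    return subset.T

# Hypothetical stand-in for the World Bank GDP-per-capita file
wbi = pd.DataFrame({
    'Country Code': ['VNM', 'PAK', 'USA'],
    '1990': [95.0, 372.0, 23889.0],
    '2020': [2786.0, 1189.0, 63531.0],
})

out = filter_country_list_data(wbi, ['VNM', 'PAK'])
print(list(out.columns))  # ['VNM', 'PAK']
```

After transposing, each selected country becomes a column, so calling `.plot()` on the result draws one time series per country.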
@@ -112,7 +122,7 @@ wbi_filtered_umi_lmi = filter_country_list_data(wbi, country_list_umi_lmi)
 wbi_filtered_umi_lmi.plot()
 ```
 
-### Plot for lower middle income
+Here is a plot for lower middle income countries.
 
 ```{code-cell} ipython3
 # Vietnam, Pakistan (Lower middle income)
@@ -121,11 +131,30 @@ wbi_filtered_lmi = filter_country_list_data(wbi, country_list_lmi)
 wbi_filtered_lmi.plot()
 ```
 
-### Plot for lower middle income and low income
+Here is a plot for lower middle income and low income countries.
 
 ```{code-cell} ipython3
 # Pakistan, Congo (Lower middle income, low income)
 country_list_lmi_li = ['PAK', 'COD']
 wbi_filtered_lmi = filter_country_list_data(wbi, country_list_lmi_li)
 wbi_filtered_lmi.plot()
 ```
+
+
+## Histogram comparison between 1960, 1990, 2020
+
+```{code-cell} ipython3
+def get_log_hist(data, years):
+    filtered_data = data.filter(items=['Country Code', years[0], years[1], years[2]])
+    log_gdp = filtered_data.iloc[:,1:].transform(lambda x: np.log(x))
+    max_log_gdp = log_gdp.max(numeric_only=True).max()
+    min_log_gdp = log_gdp.min(numeric_only=True).min()
+    log_gdp.hist(bins=16, range=[min_log_gdp, max_log_gdp], log=True)
+```
+
+```{code-cell} ipython3
+## All countries
+wbiall = wbi.drop(['Country Name', 'Indicator Name', 'Indicator Code'], axis=1)
+get_log_hist(wbiall, ['1960', '1990', '2020'])
+```
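The `get_log_hist` cell moved above shares one bin range across all years so the histograms are directly comparable. A matplotlib-free sketch of the same steps, using made-up GDP figures rather than the World Bank file (the numbers below are illustrative assumptions only):

```python
import numpy as np
import pandas as pd

# Hypothetical GDP per capita (current US$) for three countries
data = pd.DataFrame({
    'Country Code': ['USA', 'IND', 'COD'],
    '1960': [3007.0, 82.0, 220.0],
    '1990': [23889.0, 368.0, 259.0],
    '2020': [63531.0, 1913.0, 544.0],
})

# Same idea as get_log_hist: log-transform the year columns and compute
# a common range so every year's histogram uses identical bins
log_gdp = data.iloc[:, 1:].transform(np.log)
lo = log_gdp.min(numeric_only=True).min()
hi = log_gdp.max(numeric_only=True).max()

# One histogram on the shared range (numpy equivalent of DataFrame.hist)
counts, edges = np.histogram(log_gdp['2020'], bins=16, range=(lo, hi))
print(counts.sum())  # all 3 countries fall inside the shared range
```

Because `lo` and `hi` come from the pooled minimum and maximum, bars for 1960, 1990 and 2020 line up on the same axis, which is what makes the cross-year comparison meaningful.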

lectures/markov_chains.md

Lines changed: 31 additions & 27 deletions
@@ -255,6 +255,8 @@ Then we can address a range of questions, such as
 
 We'll cover such applications below.
 
+
+
 ### Defining Markov Chains
 
 So far we've given examples of Markov chains but now let's define them more
@@ -306,6 +308,9 @@ chain $\{X_t\}$ as follows:
 
 By construction, the resulting process satisfies {eq}`mpp`.
 
+
+
+
 ## Simulation
 
 ```{index} single: Markov Chains; Simulation
@@ -859,8 +864,10 @@ Importantly, the result is valid for any choice of $\psi_0$.
 
 Notice that the theorem is related to the law of large numbers.
 
+TODO -- link to our undergrad lln and clt lecture
+
 It tells us that, in some settings, the law of large numbers sometimes holds even when the
-sequence of random variables is [not IID](iid_violation).
+sequence of random variables is not IID.
 
 
 (mc_eg1-2)=
@@ -905,15 +912,15 @@ n_state = P.shape[1]
 fig, axes = plt.subplots(nrows=1, ncols=n_state)
 ψ_star = mc.stationary_distributions[0]
 plt.subplots_adjust(wspace=0.35)
-
 for i in range(n_state):
     axes[i].grid()
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
-                    label = fr'$\psi^*({i})$')
+    axes[i].set_ylim(ψ_star[i]-0.2, ψ_star[i]+0.2)
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+                    label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(f'fraction of time spent at {i}')
+    axes[i].set_ylabel(fr'average time spent at X={i}')
 
-# Compute the fraction of time spent, starting from different x_0s
+# Compute the fraction of time spent, for each X=x
 for x0, col in ((0, 'blue'), (1, 'green'), (2, 'red')):
     # Generate time series that starts at different x0
     X = mc.simulate(n, init=x0)
@@ -942,8 +949,6 @@
 The diagram of the Markov chain shows that it is **irreducible**
 
 ```{code-cell} ipython3
-:tags: [hide-input]
-
 dot = Digraph(comment='Graph')
 dot.attr(rankdir='LR')
 dot.node("0")
@@ -971,16 +976,15 @@ mc = MarkovChain(P)
 n_state = P.shape[1]
 fig, axes = plt.subplots(nrows=1, ncols=n_state)
 ψ_star = mc.stationary_distributions[0]
-
 for i in range(n_state):
     axes[i].grid()
     axes[i].set_ylim(0.45, 0.55)
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
-                    label = fr'$\psi^*({i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+                    label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(f'fraction of time spent at {i}')
+    axes[i].set_ylabel(fr'average time spent at X={i}')
 
-# Compute the fraction of time spent, for each x
+# Compute the fraction of time spent, for each X=x
 for x0 in range(n_state):
     # Generate time series starting at different x_0
     X = mc.simulate(n, init=x0)
@@ -1074,7 +1078,6 @@ In the case of Hamilton's Markov chain, the distribution $\psi P^t$ converges to
 P = np.array([[0.971, 0.029, 0.000],
               [0.145, 0.778, 0.077],
               [0.000, 0.508, 0.492]])
-
 # Define the number of iterations
 n = 50
 n_state = P.shape[0]
@@ -1094,8 +1097,8 @@ for i in range(n):
 # Loop through many initial values
 for x0 in x0s:
     x = x0
-    X = np.zeros((n, n_state))
-
+    X = np.zeros((n,n_state))
+
     # Obtain and plot distributions at each state
     for t in range(0, n):
         x = x @ P
@@ -1104,10 +1107,10 @@ for x0 in x0s:
         axes[i].plot(range(0, n), X[:,i], alpha=0.3)
 
 for i in range(n_state):
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
-                    label = fr'$\psi^*({i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black',
+                    label = fr'$\psi^*(X={i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(fr'$\psi({i})$')
+    axes[i].set_ylabel(fr'$\psi(X={i})$')
     axes[i].legend()
 
 plt.show()
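The loop in the hunk above pushes many initial distributions through the update $\psi_{t+1} = \psi_t P$. A minimal standalone sketch of that convergence, using Hamilton's transition matrix from the lecture and plain numpy (the eigenvector computation is our own cross-check, not part of the commit):

```python
import numpy as np

# Hamilton's transition matrix, as used in the lecture
P = np.array([[0.971, 0.029, 0.000],
              [0.145, 0.778, 0.077],
              [0.000, 0.508, 0.492]])

# Iterate ψ_{t+1} = ψ_t P from an arbitrary initial distribution
ψ = np.array([1.0, 0.0, 0.0])
for t in range(200):
    ψ = ψ @ P

# Cross-check: the stationary distribution is the left eigenvector of P
# associated with eigenvalue 1, normalized to sum to one
eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigvals - 1))
ψ_star = np.real(eigvecs[:, i])
ψ_star = ψ_star / ψ_star.sum()

print(np.allclose(ψ, ψ_star, atol=1e-8))  # True
```

Since the chain is irreducible and aperiodic, the iterates converge to $\psi^*$ regardless of the initial distribution, which is exactly what the fan of trajectories in the plot illustrates.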
@@ -1144,9 +1147,9 @@ for x0 in x0s:
         axes[i].plot(range(20, n), X[20:,i], alpha=0.3)
 
 for i in range(n_state):
-    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^*({i})$')
+    axes[i].axhline(ψ_star[i], linestyle='dashed', lw=2, color = 'black', label = fr'$\psi^* (X={i})$')
     axes[i].set_xlabel('t')
-    axes[i].set_ylabel(fr'$\psi({i})$')
+    axes[i].set_ylabel(fr'$\psi(X={i})$')
     axes[i].legend()
 
 plt.show()
@@ -1292,7 +1295,7 @@ In this exercise,
 
 1. show this process is asymptotically stationary and calculate the stationary distribution using simulations.
 
-1. use simulations to demonstrate ergodicity of this process.
+1. use simulation to show ergodicity.
 
 ````

@@ -1320,7 +1323,7 @@ codes_B = ( '1','2','3','4','5','6','7','8')
 np.linalg.matrix_power(P_B, 10)
 ```
 
-We find that rows of the transition matrix converge to the stationary distribution
+We find that the rows of the transition matrix converge to the stationary distribution
 
 ```{code-cell} ipython3
 mc = qe.MarkovChain(P_B)
@@ -1341,17 +1344,17 @@ ax.axhline(0, linestyle='dashed', lw=2, color = 'black', alpha=0.4)
 
 for x0 in range(8):
-    # Calculate the fraction of time for each worker
+    # Calculate the average time for each worker
     X_bar = (X == x0).cumsum() / (1 + np.arange(N, dtype=float))
     ax.plot(X_bar - ψ_star[x0], label=f'$X = {x0+1} $')
 ax.set_xlabel('t')
-ax.set_ylabel(r'fraction of time spent in a state $- \psi^* (x)$')
+ax.set_ylabel(fr'average time spent in a state $- \psi^* (X=x)$')
 
 ax.legend()
 plt.show()
 ```
 
-Note that the fraction of time spent at each state quickly converges to the probability assigned to that state by the stationary distribution.
+We can see that the fraction of time spent at each state quickly converges to the stationary distribution.
 
 ```{solution-end}
 ```
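The solution above uses `mc.simulate` from quantecon. A hand-rolled sketch of the same ergodicity check on a two-state chain with a known stationary distribution (numpy only; the particular chain, seed, and sample size here are our own choices, not part of the commit):

```python
import numpy as np

rng = np.random.default_rng(1234)

# For P = [[1-a, a], [b, 1-b]], the stationary distribution has the
# closed form ψ* = (b/(a+b), a/(a+b))
a, b = 0.2, 0.1
P = np.array([[1 - a, a], [b, 1 - b]])
ψ_star = np.array([b / (a + b), a / (a + b)])

# Simulate the chain and track the fraction of time spent in state 1
n = 100_000
x, visits = 0, 0
for t in range(n):
    x = rng.choice(2, p=P[x])  # draw next state from row x of P
    visits += (x == 1)

fraction = visits / n
print(abs(fraction - ψ_star[1]))  # small, by ergodicity
```

By ergodicity, the time average `fraction` converges to $\psi^*(X=1) = 2/3$ as $n$ grows, mirroring what the worker-state plots in the solution show.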
@@ -1449,9 +1452,10 @@ However, another way to verify irreducibility is by checking whether $A$ satisfi
 
 Assume $A$ is an $n \times n$ matrix. Then $A$ is irreducible if and only if $\sum_{k=0}^{n-1}A^k$ is a positive matrix.
 
-(see more: {cite}`zhao_power_2012` and [here](https://math.stackexchange.com/questions/3336616/how-to-prove-this-matrix-is-a-irreducible-matrix))
+(see more at {cite}`zhao_power_2012` and [here](https://math.stackexchange.com/questions/3336616/how-to-prove-this-matrix-is-a-irreducible-matrix))
 
 Based on this claim, write a function to test irreducibility.
+
 ```
 
 ```{solution-start} mc_ex3
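One possible function for the exercise stated above, sketched directly from the claim that $A$ is irreducible if and only if $\sum_{k=0}^{n-1}A^k$ is positive (the two test matrices below are made up for illustration):

```python
import numpy as np

def is_irreducible(A):
    """Test irreducibility of a nonnegative n x n matrix A via the
    claim above: A is irreducible iff sum_{k=0}^{n-1} A^k > 0."""
    n = A.shape[0]
    S = np.zeros_like(A, dtype=float)
    Ak = np.eye(n)          # A^0 = identity
    for k in range(n):
        S += Ak
        Ak = Ak @ A         # advance to A^{k+1}
    return bool(np.all(S > 0))

P_irreducible = np.array([[0.5, 0.5],
                          [0.9, 0.1]])
P_reducible = np.array([[1.0, 0.0],
                        [0.4, 0.6]])  # state 0 is absorbing

print(is_irreducible(P_irreducible))  # True
print(is_irreducible(P_reducible))    # False
```

The reducible example fails because state 1 is unreachable from the absorbing state 0, so the $(0, 1)$ entry of $\sum_k A^k$ stays zero.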