Commit 70b199e --- first commit

1 parent 04da5c8

3 files changed: +307 −283 lines

lectures/_toc.yml --- 2 additions, 1 deletion

@@ -22,6 +22,7 @@ parts:
   - file: short_path
   - file: scalar_dynam
   - file: linear_equations
+  - file: eigen
   - file: lp_intro
   - file: lln_clt
   - file: monte_carlo
@@ -30,7 +31,7 @@ parts:
   numbered: true
   chapters:
   - file: simple_linear_regression
-  - file: eigen
+  - file: eigen_II
 - caption: Models
   numbered: true
   chapters:

lectures/eigen.md --- 1 addition, 282 deletions

@@ -833,285 +833,4 @@ to one.
The eigenvectors and eigenvalues of a map $A$ determine how a vector $v$ is transformed when we repeatedly multiply by $A$.

This is discussed further below.
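As a concrete illustration (the matrix and starting vector below are ours, not from the lecture), repeated multiplication by $A$ pulls an arbitrary starting vector toward the dominant eigenvector:

```{code-cell} ipython3
import numpy as np
from numpy.linalg import eig

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # illustrative matrix with eigenvalues 3 and 1
v = np.array([1.0, 0.0])     # arbitrary starting vector

for _ in range(50):            # repeatedly multiply by A ...
    v = A @ v
    v = v / np.linalg.norm(v)  # ... normalizing to avoid overflow

evals, evecs = eig(A)
dominant = evecs[:, np.argmax(np.abs(evals))]  # unit eigenvector for the largest eigenvalue

# v now points along the dominant eigenvector (up to sign)
print(np.allclose(np.abs(v), np.abs(dominant)))
```

After 50 multiplications the component along the smaller eigenvalue has shrunk by a factor of roughly $(1/3)^{50}$, which is why the alignment is essentially exact.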
## Nonnegative Matrices

Often, in economics, the matrix that we are dealing with is nonnegative.

Nonnegative matrices have several special and useful properties.

In this section we discuss some of them --- in particular, the connection between nonnegativity and eigenvalues.

### Nonnegative Matrices

An $n \times m$ matrix $A$ is called **nonnegative** if every element of $A$ is nonnegative, i.e., $a_{ij} \geq 0$ for every $i,j$.

We denote this as $A \geq 0$.
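This definition is easy to check numerically; here is a small sketch (the helper name and test matrices are illustrative, not from the lecture):

```{code-cell} ipython3
import numpy as np

def is_nonnegative(M):
    "Return True if every element of M satisfies m_ij >= 0."
    return bool(np.all(M >= 0))

A = np.array([[0.5, 0.1],
              [0.2, 0.2]])             # every entry is >= 0

print(is_nonnegative(A))                              # nonnegative
print(is_nonnegative(np.array([[1, -1], [0, 2]])))    # has a negative entry
```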
### Irreducible Matrices

Let $A$ be a square nonnegative matrix and let $A^k$ be the $k$-th power of $A$.

Let $a^{k}_{ij}$ be element $(i,j)$ of $A^k$.

$A$ is called **irreducible** if for each $(i,j)$ there is an integer $k \geq 0$ such that $a^{k}_{ij} > 0$.

A matrix $A$ that is not irreducible is called **reducible**.

Here are some examples to illustrate this further:

1. $A = \begin{bmatrix} 0.5 & 0.1 \\ 0.2 & 0.2 \end{bmatrix}$ is irreducible since $a_{ij}>0$ for all $(i,j)$.

2. $A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ is irreducible since $a_{12},a_{21} > 0$ and $a^{2}_{11},a^{2}_{22} > 0$.

3. $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ is reducible since $A^k = A$ for all $k \geq 1$ and thus $a^{k}_{12},a^{k}_{21} = 0$ for all $k \geq 1$.
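One way to test irreducibility numerically uses the standard fact that a nonnegative $n \times n$ matrix $A$ is irreducible if and only if every entry of $(I + A)^{n-1}$ is strictly positive. A sketch (the helper name is ours), applied to the three examples above:

```{code-cell} ipython3
import numpy as np

def is_irreducible(A):
    """Sketch of an irreducibility test: a nonnegative n x n matrix A is
    irreducible iff every entry of (I + A)^(n-1) is strictly positive."""
    n = A.shape[0]
    M = np.linalg.matrix_power(np.identity(n) + A, n - 1)
    return bool(np.all(M > 0))

A1 = np.array([[0.5, 0.1], [0.2, 0.2]])   # example 1: irreducible
A2 = np.array([[0, 1], [1, 0]])           # example 2: irreducible
A3 = np.identity(2)                       # example 3: reducible

print(is_irreducible(A1), is_irreducible(A2), is_irreducible(A3))
```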
### The Perron-Frobenius Theorem

For a nonnegative matrix $A$, the behaviour of $A^k$ as $k \to \infty$ is controlled by the eigenvalue with the largest absolute value, often called the **dominant eigenvalue**.

For a matrix $A$, the Perron-Frobenius theorem characterises certain properties of the dominant eigenvalue and its corresponding eigenvector when $A$ is a nonnegative square matrix.

```{prf:theorem} Perron-Frobenius Theorem
:label: perron-frobenius

If a matrix $A \geq 0$ then,

1. the dominant eigenvalue of $A$, $r(A)$, is real-valued and nonnegative.
2. for any other eigenvalue (possibly complex) $\lambda$ of $A$, $|\lambda| \leq r(A)$.
3. we can find a nonnegative and nonzero eigenvector $v$ such that $Av = r(A)v$.

Moreover, if $A$ is also irreducible, then

4. the eigenvector $v$ associated with the eigenvalue $r(A)$ is strictly positive.
5. there exists no other positive eigenvector $v$ (except scalar multiples of $v$) associated with $r(A)$.
```

(This is a relatively simple version of the theorem --- for more details see [here](https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem).)

We will see applications of the theorem below.
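We can check the theorem's claims numerically on a small irreducible nonnegative matrix (a sanity check on an illustrative matrix, not a proof):

```{code-cell} ipython3
import numpy as np
from numpy.linalg import eig

A = np.array([[0.5, 0.1],
              [0.2, 0.2]])      # irreducible nonnegative matrix

evals, evecs = eig(A)
i = np.argmax(np.abs(evals))
r = evals[i].real               # dominant eigenvalue r(A)
v = evecs[:, i].real
v = v if v[0] > 0 else -v       # eigenvectors are defined up to sign

print(r >= 0)                                     # claim 1: r(A) is real and nonnegative
print(all(abs(λ) <= r + 1e-12 for λ in evals))    # claim 2: |λ| <= r(A) for all λ
print(np.all(v > 0))                              # claim 4: eigenvector strictly positive
```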
(la_neumann)=
## The Neumann Series Lemma

```{index} single: Neumann's Lemma
```

In this section we present a famous result about series of matrices that has many applications in economics.

### Scalar Series

Here's a fundamental result about series that you surely know:

If $a$ is a number and $|a| < 1$, then

```{math}
:label: gp_sum

\sum_{k=0}^{\infty} a^k = \frac{1}{1-a} = (1 - a)^{-1}
```

For the one-dimensional linear equation $x = ax + b$, where $x$ is unknown, we can thus conclude that the solution $x^{*}$ is given by

$$
x^{*} = \frac{b}{1-a} = \sum_{k=0}^{\infty} a^k b
$$
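A quick numerical sketch of this solution (the values of $a$ and $b$ are illustrative): the truncated series $\sum_k a^k b$ approaches $b/(1-a)$, which indeed satisfies $x = ax + b$.

```{code-cell} ipython3
a, b = 0.9, 2.0

x_star = b / (1 - a)                         # closed-form solution of x = a x + b
series = sum(a**k * b for k in range(1000))  # truncated sum of a^k b

print(abs(x_star - series) < 1e-10)              # the series converges to x*
print(abs((a * x_star + b) - x_star) < 1e-12)    # x* solves x = a x + b
```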
### Matrix Series

A generalization of this idea exists in the matrix setting.

Consider the system of equations $x = Ax + b$ where $A$ is an $n \times n$ square matrix and $x$ and $b$ are both column vectors in $\mathbb{R}^n$.

Using matrix algebra, we can conclude that the solution to this system of equations is given by

```{math}
:label: neumann_eqn

x^{*} = (I-A)^{-1}b
```

What guarantees the existence of a unique vector $x^{*}$ that satisfies {eq}`neumann_eqn`?

The following is a fundamental result in functional analysis that generalises {eq}`gp_sum` to the multivariate case.
```{prf:theorem} Neumann Series Lemma
:label: neumann_series_lemma

Let $A$ be a square matrix and let $A^k$ be the $k$-th power of $A$.

Let $r(A)$ be the dominant eigenvalue of $A$, commonly called the *spectral radius*, defined as $\max_i |\lambda_i|$, where

* $\{\lambda_i\}_i$ is the set of eigenvalues of $A$ and
* $|\lambda_i|$ is the modulus of the complex number $\lambda_i$.

Neumann's theorem states the following: if $r(A) < 1$, then $I - A$ is invertible, and

$$
(I - A)^{-1} = \sum_{k=0}^{\infty} A^k
$$
```

We can see the Neumann series lemma in action in the following example.
```{code-cell} ipython3
import numpy as np
from numpy.linalg import eig

A = np.array([[0.4, 0.1],
              [0.7, 0.2]])

evals, evecs = eig(A)            # find eigenvalues and eigenvectors

r = max(abs(λ) for λ in evals)   # compute spectral radius
print(r)
```
The spectral radius $r(A)$ obtained is less than 1.

Thus, we can apply the Neumann Series lemma to find $(I-A)^{-1}$.
```{code-cell} ipython3
I = np.identity(2)               # 2 x 2 identity matrix
B = I - A
```

```{code-cell} ipython3
B_inverse = np.linalg.inv(B)     # direct inverse method
```

```{code-cell} ipython3
A_sum = np.zeros((2, 2))         # power series sum of A
A_power = I
for i in range(50):
    A_sum += A_power
    A_power = A_power @ A
```
Let's check equality between the sum and the inverse methods.

```{code-cell} ipython3
np.allclose(A_sum, B_inverse)
```

Although we truncate the infinite sum at $k = 50$, both methods give the same answer, which illustrates the Neumann Series lemma.
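The reason $k = 50$ is more than enough: the truncation error after $K$ terms equals the tail $\sum_{k > K} A^k = A^{K+1}(I-A)^{-1}$, whose size shrinks at the geometric rate $r(A) < 1$. A short sketch (reusing the same matrix) tracks the error as terms are added:

```{code-cell} ipython3
import numpy as np

A = np.array([[0.4, 0.1],
              [0.7, 0.2]])      # same matrix as in the example above
I = np.identity(2)
true_inv = np.linalg.inv(I - A)

# Compare truncated Neumann sums against the exact inverse
A_sum, A_power = np.zeros((2, 2)), I
errors = []
for k in range(1, 21):
    A_sum += A_power
    A_power = A_power @ A
    errors.append(np.max(np.abs(A_sum - true_inv)))

# The error falls roughly by a factor of r(A) ≈ 0.58 at each step
print(errors[0] > errors[5] > errors[10] > errors[19])
```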
## Exercises

```{exercise-start} Leontief's Input-Output Model
:label: eig_ex1
```

[Wassily Leontief](https://en.wikipedia.org/wiki/Wassily_Leontief) developed a model of an economy with $n$ sectors producing $n$ different commodities, representing the interdependencies between the different sectors of an economy.

Under this model some of the output is consumed internally by the industries and the rest is consumed by external consumers.

We define a simple model with 3 sectors --- agriculture, industry, and service.

The following table describes how output is distributed within the economy:

|             | Total output | Agriculture | Industry | Service | Consumer |
|:-----------:|:------------:|:-----------:|:--------:|:-------:|:--------:|
| Agriculture | $x_1$        | 0.3$x_1$    | 0.2$x_2$ |0.3$x_3$ | 4        |
| Industry    | $x_2$        | 0.2$x_1$    | 0.4$x_2$ |0.3$x_3$ | 5        |
| Service     | $x_3$        | 0.2$x_1$    | 0.5$x_2$ |0.1$x_3$ | 12       |

The first row depicts how agriculture's total output $x_1$ is distributed:

* $0.3x_1$ is used as inputs within agriculture itself,
* $0.2x_2$ is used as inputs by the industry sector to produce $x_2$ units,
* $0.3x_3$ is used as inputs by the service sector to produce $x_3$ units, and
* 4 units is the external demand by consumers.

We can transform this into a system of linear equations for the 3 sectors as given below:

$$
x_1 = 0.3x_1 + 0.2x_2 + 0.3x_3 + 4 \\
x_2 = 0.2x_1 + 0.4x_2 + 0.3x_3 + 5 \\
x_3 = 0.2x_1 + 0.5x_2 + 0.1x_3 + 12
$$
This can be transformed into the matrix equation $x = Ax + d$ where

$$
x =
\begin{bmatrix}
    x_1 \\
    x_2 \\
    x_3
\end{bmatrix}
, \; A =
\begin{bmatrix}
    0.3 & 0.2 & 0.3 \\
    0.2 & 0.4 & 0.3 \\
    0.2 & 0.5 & 0.1
\end{bmatrix}
\; \text{and} \;
d =
\begin{bmatrix}
    4 \\
    5 \\
    12
\end{bmatrix}
$$

The solution $x^{*}$ is given by the equation $x^{*} = (I-A)^{-1} d$.

1. Since $A$ is a nonnegative irreducible matrix, find the Perron-Frobenius eigenvalue of $A$.

2. Use the Neumann Series lemma to find the solution $x^{*}$ if it exists.

```{exercise-end}
```
```{solution-start} eig_ex1
:class: dropdown
```

```{code-cell} ipython3
A = np.array([[0.3, 0.2, 0.3],
              [0.2, 0.4, 0.3],
              [0.2, 0.5, 0.1]])

evals, evecs = eig(A)

r = max(abs(λ) for λ in evals)   # dominant eigenvalue / spectral radius
print(r)
```

Since $r(A) < 1$, we can find the solution using the Neumann Series lemma.

```{code-cell} ipython3
I = np.identity(3)
B = I - A

d = np.array([4, 5, 12])
d.shape = (3, 1)

B_inv = np.linalg.inv(B)
x_star = B_inv @ d
print(x_star)
```

```{solution-end}
```

This is discussed further later.
