name: python3
---

# Theorems of Nonnegative Matrices and Eigenvalues

```{index} single: Eigenvalues and Eigenvectors
```

```{contents} Contents
:depth: 2
```

In this lecture we will begin with the basic properties of nonnegative matrices.

Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.

We will use the following imports:

```{code-cell} ipython3
import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import eig
```

## Nonnegative Matrices

Often, in economics, the matrix that we are dealing with is nonnegative.

Nonnegative matrices have several special and useful properties.

In this section we discuss some of them --- in particular, the connection between nonnegativity and eigenvalues.

Let $a^{k}_{ij}$ be element $(i,j)$ of $A^k$.

An $n \times m$ matrix $A$ is called **nonnegative** if every element of $A$ is nonnegative, i.e., $a_{ij} \geq 0$ for every $i,j$.

We denote this as $A \geq 0$.
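
To make the definition concrete, here is a quick numerical check (a minimal sketch; the matrix `A` below is an illustrative example of our own, not one from the lecture):

```{code-cell} ipython3
import numpy as np

A = np.array([[0.5, 0.3],
              [0.2, 0.7]])

# A is nonnegative when every entry satisfies a_{ij} >= 0
print(np.all(A >= 0))  # True
```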

### Irreducible Matrices

We have (informally) introduced irreducible matrices in the Markov chain lecture (TODO: link to Markov chain lecture).

Here we will introduce this concept formally.

$A$ is called **irreducible** if for *each* $(i,j)$ there is an integer $k \geq 0$ such that $a^{k}_{ij} > 0$.

A matrix $A$ that is not irreducible is called reducible.

Here are some examples to illustrate this further.

3. $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ is reducible since $A^k = A$ for all $k \geq 0$ and thus $a^{k}_{12},a^{k}_{21} = 0$ for all $k \geq 0$.
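
The reducibility of the identity matrix in example 3 can be verified numerically (a short sketch using NumPy's `matrix_power`):

```{code-cell} ipython3
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 0],
              [0, 1]])

# A^k = A for every k, so the off-diagonal entries
# a^k_{12} and a^k_{21} stay at zero: A is reducible
for k in range(1, 5):
    Ak = matrix_power(A, k)
    print(k, Ak[0, 1], Ak[1, 0])
```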

### Primitive Matrices

Let $A$ be a square nonnegative matrix and let $A^k$ be the $k^{th}$ power of $A$.

A matrix is considered **primitive** if there exists a $k \in \mathbb{N}$ such that $A^k$ is everywhere positive.

That is, $A$ is called primitive if there is an integer $k \geq 0$ such that $a^{k}_{ij} > 0$ for *all* $(i,j)$.

We can see that if a matrix is primitive, then it is also irreducible.

This is because if there exists a $k$ such that $a^{k}_{ij} > 0$ for all $(i,j)$, then $A^{k+1}, A^{k+2}, \ldots$ are also everywhere positive.

In other words, a primitive matrix is both irreducible and aperiodic, since aperiodicity requires that a state can return to itself after any sufficiently large number of iterations.
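
To illustrate (a sketch; the matrix below is an example of our own choosing), the following matrix has a zero entry, yet its square is everywhere positive, so it is primitive:

```{code-cell} ipython3
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[0, 1],
              [1, 1]])

print(np.all(A > 0))                   # False: A itself is not everywhere positive
print(np.all(matrix_power(A, 2) > 0))  # True: A^2 is, so A is primitive
```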

### Left Eigenvectors

We have previously discussed right (ordinary) eigenvectors $Av = \lambda v$.

Here we introduce left eigenvectors.

Left eigenvectors will play important roles in what follows, including that of stochastic steady states for dynamic models under a Markov assumption.

We will talk more about this later, but for now, let's define left eigenvectors.

A vector $\varepsilon$ is called a left eigenvector of $A$ if $\varepsilon$ is an eigenvector of $A^T$.

In other words, if $\varepsilon$ is a left eigenvector of matrix $A$, then $A^T \varepsilon = \lambda \varepsilon$, where $\lambda$ is the eigenvalue associated with the left eigenvector $\varepsilon$.
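
Using the `eig` routine imported earlier, a left eigenvector can be computed as an ordinary (right) eigenvector of $A^T$ (a sketch; the matrix `A` here is our own illustrative example):

```{code-cell} ipython3
import numpy as np
from numpy.linalg import eig

A = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Left eigenvectors of A are right eigenvectors of A^T
eigvals, eigvecs = eig(A.T)
ε = eigvecs[:, 0]
λ = eigvals[0]

# Check that A^T ε = λ ε
print(np.allclose(A.T @ ε, λ * ε))  # True
```
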
Let's build our intuition for the theorem using a simple example.

In fact, we have already seen the Perron-Frobenius theorem in action before in the exercise (TODO: link to Markov chain exercise).

In the exercise, we stated that the convergence rate is determined by the spectral gap, the difference between the largest and the second largest eigenvalue (in modulus).
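
As a sketch of that computation (the transition matrix `P` below is illustrative, not the one from the exercise), the spectral gap can be read off directly from the eigenvalue moduli:

```{code-cell} ipython3
import numpy as np
from numpy.linalg import eig

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

eigvals, _ = eig(P)
# Sort the eigenvalue moduli in descending order
moduli = np.sort(np.abs(eigvals))[::-1]
spectral_gap = moduli[0] - moduli[1]
print(spectral_gap)  # ≈ 0.3 for this P
```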