Commit 7b05136

update graphviz and add more content to eigen2
1 parent 9fc98a2 commit 7b05136

File tree: 3 files changed (+78, -17 lines)


lectures/eigen_II.md

Lines changed: 76 additions & 17 deletions
@@ -11,7 +11,7 @@ kernelspec:
 name: python3
 ---

-# Eigenvalues and Eigenvectors of Nonnegative matrices
+# Theorems of Nonnegative Matrices and Eigenvalues

 ```{index} single: Eigenvalues and Eigenvectors
 ```
@@ -20,39 +20,34 @@ kernelspec:
 :depth: 2
 ```

+In this lecture we will begin with the basic properties of nonnegative matrices.
+
+Then we will explore the Perron-Frobenius Theorem and the Neumann Series Lemma, and connect them to applications in Markov chains and networks.
+
+We will use the following imports:
+
 ```{code-cell} ipython3
 import matplotlib.pyplot as plt
 import numpy as np
 from numpy.linalg import eig
 ```

+## Nonnegative Matrices
+
 Often, in economics, the matrix that we are dealing with is nonnegative.

 Nonnegative matrices have several special and useful properties.

 In this section we discuss some of them --- in particular, the connection
 between nonnegativity and eigenvalues.

-
-## Nonnegative Matrices
-
 Let $a^{k}_{ij}$ be element $(i,j)$ of $A^k$.

 An $n \times m$ matrix $A$ is called **nonnegative** if every element of $A$
 is nonnegative, i.e., $a_{ij} \geq 0$ for every $i,j$.

 We denote this as $A \geq 0$.
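
As a quick aside (a minimal sketch, not part of the commit, using an arbitrary example matrix), nonnegativity is easy to check elementwise with NumPy:

```python
import numpy as np

B = np.array([[0.5, 0.3],
              [0.2, 0.7]])

# B is nonnegative if every entry is >= 0
print(np.all(B >= 0))  # True
```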

-### Primitive Matrices
-
-Let $A$ be a square nonnegative matrix and let $A^k$ be the $k^{th}$ power of A.
-
-A matrix is consisdered **primitive** if there exists a $k \in \mathbb{N}$ such that $A^k$ is everywhere positive.
-
-It means that $A$ is called primitive if there is an integer $k \geq 0$ such that $a^{k}_{ij} > 0$ for *all* $(i,j)$.
-
-This concept is closely related to irreducible matrices.
-
 ### Irreducible Matrices

 We have (informally) introduced irreducible matrices in the Markov chain lecture (TODO: link to Markov chain lecture).
@@ -61,8 +56,6 @@ Here we will introduce this concept formally.

 $A$ is called **irreducible** if for *each* $(i,j)$ there is an integer $k \geq 0$ such that $a^{k}_{ij} > 0$.

-We can see that if a matrix is primitive, then it implies the matrix is irreducible.
-
 A matrix $A$ that is not irreducible is called reducible.

 Here are some examples to illustrate this further.
@@ -74,9 +67,68 @@ Here are some examples to illustrate this further.
 3. $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ is reducible since $A^k = A$ for all $k \geq 0$ and thus
 $a^{k}_{12},a^{k}_{21} = 0$ for all $k \geq 0$.
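
As a numerical complement to these examples (a sketch; the `is_irreducible` helper and the test matrices are illustrative and not part of the lecture), one can use the fact that an $n \times n$ nonnegative matrix is irreducible exactly when $\sum_{k=0}^{n-1} A^k$ is everywhere positive:

```python
import numpy as np

def is_irreducible(A):
    """Check irreducibility by testing whether sum_{k=0}^{n-1} A^k is everywhere positive."""
    n = A.shape[0]
    S = sum(np.linalg.matrix_power(A, k) for k in range(n))
    return bool(np.all(S > 0))

# Example 3 above: the identity matrix is reducible
print(is_irreducible(np.eye(2)))            # False

# A two-state cycle is irreducible (every state can reach every other)
print(is_irreducible(np.array([[0, 1],
                               [1, 0]])))   # True
```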

-### Left and Right Eigenvectors
+### Primitive Matrices
+
+Let $A$ be a square nonnegative matrix and let $A^k$ be the $k^{th}$ power of $A$.
+
+A matrix is considered **primitive** if there exists a $k \in \mathbb{N}$ such that $A^k$ is everywhere positive.
+
+That is, $A$ is called primitive if there is an integer $k \geq 0$ such that $a^{k}_{ij} > 0$ for *all* $(i,j)$.
+
+We can see that if a matrix is primitive, then it is also irreducible.
+
+This is because if there exists a $k$ such that $a^{k}_{ij} > 0$ for all $(i,j)$, then the same property is guaranteed for the powers $A^{k+1}, A^{k+2}, \ldots$
+
+In other words, a primitive matrix is both irreducible and aperiodic, since aperiodicity requires that a state, once visited, is guaranteed to return to itself after a certain number of iterations.
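
A small sketch of the distinction drawn here (the `is_primitive` helper and both matrices are illustrative assumptions, not code from the lecture): the two-state cycle is irreducible but not primitive because its powers alternate between two patterns, while adding a self-loop makes it primitive.

```python
import numpy as np

def is_primitive(A, max_power=50):
    """Return True if some power A^k (1 <= k <= max_power) is everywhere positive."""
    Ak = np.eye(A.shape[0])
    for _ in range(max_power):
        Ak = Ak @ A
        if np.all(Ak > 0):
            return True
    return False

# Irreducible but not primitive: powers alternate between the cycle and the identity
P = np.array([[0, 1],
              [1, 0]])
print(is_primitive(P))   # False

# Primitive (hence also irreducible): the self-loop breaks the periodicity
Q = np.array([[0, 1],
              [1, 1]])
print(is_primitive(Q))   # True, since Q^2 is everywhere positive
```
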
+
+### Left Eigenvectors
+
+We have previously discussed right (ordinary) eigenvectors $Av = \lambda v$.
+
+Here we introduce left eigenvectors.
+
+Left eigenvectors will play important roles in what follows, including that of stochastic steady states for dynamic models under a Markov assumption.

+We will talk more about this later, but for now, let's define left eigenvectors.

+A vector $\varepsilon$ is called a left eigenvector of $A$ if $\varepsilon$ is an eigenvector of $A^T$.
+
+In other words, if $\varepsilon$ is a left eigenvector of matrix $A$, then $A^T \varepsilon = \lambda \varepsilon$, where $\lambda$ is the eigenvalue associated with the left eigenvector $\varepsilon$.
+
+This hints at how to compute left eigenvectors:
+
+```{code-cell} ipython3
+# Define a sample matrix
+A = np.array([[3, 2],
+              [1, 4]])
+
+# Compute right eigenvectors and eigenvalues
+right_eigenvalues, right_eigenvectors = np.linalg.eig(A)
+
+# Compute left eigenvectors and eigenvalues
+left_eigenvalues, left_eigenvectors = np.linalg.eig(A.T)
+
+# Transpose left eigenvectors for comparison (because they are returned as column vectors)
+left_eigenvectors = left_eigenvectors.T
+
+print("Matrix A:")
+print(A)
+print("\nRight Eigenvalues:")
+print(right_eigenvalues)
+print("\nRight Eigenvectors:")
+print(right_eigenvectors)
+print("\nLeft Eigenvalues:")
+print(left_eigenvalues)
+print("\nLeft Eigenvectors:")
+print(left_eigenvectors)
+left_eigenvectors @ right_eigenvectors
+```
+
+Note that the eigenvalues for both left and right eigenvectors are the same, but the eigenvectors themselves are different.
+
+We can then take the transpose of $A^T \varepsilon = \lambda \varepsilon$ to obtain $\varepsilon^T A = \lambda \varepsilon^T$.
+
+This is the more common expression and is where the name left eigenvector originates.
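
To make the row form concrete, here is a short self-contained check (a sketch reusing the same sample matrix as the cell above) that each left eigenvector computed via `A.T` satisfies $\varepsilon^T A = \lambda \varepsilon^T$:

```python
import numpy as np

A = np.array([[3, 2],
              [1, 4]])

# Left eigenvectors of A are the (right) eigenvectors of A.T
eigenvalues, left_vecs = np.linalg.eig(A.T)

for lam, eps in zip(eigenvalues, left_vecs.T):
    # Row form: eps^T A should equal lam * eps^T
    print(np.allclose(eps @ A, lam * eps))   # True for each eigenpair
```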

 ### The Perron-Frobenius Theorem

@@ -101,11 +153,18 @@ Moreover if $A$ is also irreducible then,
 4. the eigenvector $v$ associated with the eigenvalue $r(A)$ is strictly positive.
 5. there exists no other positive eigenvector $v$ (except scalar multiples of $v$) associated with $r(A)$.

+If $A$ is primitive then,
+6. the inequality $|\lambda| \leq r(A)$ is strict for all eigenvalues $\lambda$ of $A$ distinct from $r(A)$, and
+7. with $e$ and $\varepsilon$ normalized so that the inner product $\langle \varepsilon, e \rangle = 1$, we have
+   $r(A)^{-m} A^m$ converges to $e \varepsilon^{\top}$ as $m \rightarrow \infty$.
 ```

 (This is a relatively simple version of the theorem --- for more details see
 [here](https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius_theorem)).

+Let's build our intuition for the theorem using a simple example.
+
+
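The example itself is not shown in this hunk; here is a hedged sketch of what such an illustration might look like (the matrix below is an arbitrary positive, hence primitive, matrix): it checks numerically that $r(A)^{-m} A^m$ approaches the outer product $e \varepsilon^{\top}$.

```python
import numpy as np

A = np.array([[0.5, 0.4],
              [0.3, 0.6]])     # everywhere positive, so primitive

# Dominant eigenpairs of A and A.T (right eigenvector e, left eigenvector eps)
eigvals, right_vecs = np.linalg.eig(A)
eigvals_T, left_vecs = np.linalg.eig(A.T)
e = right_vecs[:, np.argmax(eigvals.real)].real
eps = left_vecs[:, np.argmax(eigvals_T.real)].real
r = np.max(eigvals.real)

# Normalize so that <eps, e> = 1
eps = eps / (eps @ e)

# r(A)^{-m} A^m should approach the outer product e eps^T
m = 50
print(np.allclose(np.linalg.matrix_power(A, m) / r**m, np.outer(e, eps)))  # True
```
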
 In fact, we have already seen the Perron-Frobenius theorem in action before in the exercise (TODO: link to Markov chain exercise).

 In the exercise, we stated that the convergence rate is determined by the spectral gap, the difference between the largest and the second largest eigenvalue.
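
A brief numerical illustration of that claim (a sketch with an arbitrary stochastic matrix, not the one from the exercise): the distance of $P^m$ from its limit shrinks at roughly the rate of the second-largest eigenvalue modulus.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # an arbitrary stochastic matrix

moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("eigenvalue moduli:", moduli)          # [1.0, 0.7]
print("spectral gap:", moduli[0] - moduli[1])

P_limit = np.linalg.matrix_power(P, 200)     # effectively the limiting matrix
for m in (5, 10, 20):
    gap_to_limit = np.max(np.abs(np.linalg.matrix_power(P, m) - P_limit))
    print(m, gap_to_limit, moduli[1]**m)     # error tracks |lambda_2|^m
```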

lectures/markov_chains_I.md

Lines changed: 1 addition & 0 deletions
@@ -21,6 +21,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 :tags: [hide-output]

 !pip install quantecon
+!pip install graphviz
 ```

 +++ {"user_expressions": []}

lectures/markov_chains_II.md

Lines changed: 1 addition & 0 deletions
@@ -21,6 +21,7 @@ In addition to what's in Anaconda, this lecture will need the following librarie
 :tags: [hide-output]

 !pip install quantecon
+!pip install graphviz
 ```

 +++ {"user_expressions": []}
