
Commit ed08cfd

Tom's June 13 edits of SVD lecture
1 parent c442e66 commit ed08cfd

File tree

1 file changed: +22 −26 lines


lectures/svd_intro.md

Lines changed: 22 additions & 26 deletions
@@ -1047,10 +1047,10 @@ $$
 \tilde A =\tilde U^T \hat A \tilde U
 $$ (eq:Atildered)

-Because we are now working with a reduced SVD, so that $\tilde U \tilde U^T \neq I$, we can't recover $\hat A$ from $ \hat A \neq \tilde U \tilde A \tilde U^T$.
+Because we are now working with a reduced SVD, so that $\tilde U \tilde U^T \neq I$, since $\hat A \neq \tilde U \tilde A \tilde U^T$, we can't simply recover $\hat A$ from $\tilde A$ and $\tilde U$.


-Nevertheless, hoping for the best, we trudge on and construct an eigendecomposition of what is now a
+Nevertheless, hoping for the best, we persist and construct an eigendecomposition of what is now a
 $p \times p$ matrix $\tilde A$:

 $$
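
As an aside on the formulas in this hunk, here is a minimal NumPy sketch of the reduced SVD of $X$, the projected matrix $\tilde A = \tilde U^T \hat A \tilde U$ from eq:Atildered, and its eigendecomposition. The matrix sizes and variable names are illustrative only, and the sketch assumes the least-squares estimate $\hat A = X' \tilde V \tilde \Sigma^{-1} \tilde U^T$ used earlier in the lecture.

```python
# Minimal illustrative sketch (not part of the lecture file): reduced SVD of X,
# the projected p x p matrix A_tilde = U^T A_hat U, and its eigendecomposition.
import numpy as np

m, n = 20, 6                                       # tall-and-thin case, m >> n
rng = np.random.default_rng(0)
X  = rng.standard_normal((m, n))                   # snapshots x_1, ..., x_n
Xp = rng.standard_normal((m, n))                   # shifted snapshots ("X prime")

U, S, Vt = np.linalg.svd(X, full_matrices=False)   # reduced SVD: X = U diag(S) V^T
V, p = Vt.T, S.size

A_hat   = Xp @ V @ np.diag(1.0 / S) @ U.T          # assumed least-squares estimate of A
A_tilde = U.T @ A_hat @ U                          # p x p matrix in eq:Atildered

# With a reduced SVD, U @ U.T != I, so A_hat cannot be recovered as U @ A_tilde @ U.T
print(np.allclose(U @ U.T, np.eye(m)))             # False here, since p < m

Lam, W = np.linalg.eig(A_tilde)                    # A_tilde W = W diag(Lam)
```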
@@ -1081,7 +1081,7 @@ That
 $ \hat A \tilde \Phi_s \neq \tilde \Phi_s \Lambda $ means, that unlike the corresponding situation in Representation 2, columns of $\tilde \Phi_s = \tilde U W$
 are **not** eigenvectors of $\hat A$ corresponding to eigenvalues $\Lambda$.

-But in the quest for eigenvectors of $\hat A$ that we can compute with a reduced SVD, let's define
+But in a quest for eigenvectors of $\hat A$ that we *can* compute with a reduced SVD, let's define

 $$
 \Phi \equiv \hat A \tilde \Phi_s = X' \tilde V \tilde \Sigma^{-1} W
@@ -1090,12 +1090,12 @@ $$
 It turns out that columns of $\Phi$ **are** eigenvectors of $\hat A$,
 a consequence of a result established by Tu et al. {cite}`tu_Rowley`.

-To present their result, for convenience we'll drop the tilde $\tilde \cdot$ for $U, V,$ and $\Sigma$
-and adopt the understanding that they are computed with a reduced SVD.
+To present their result, for convenience we'll drop the tilde $\tilde \cdot$ above $U, V,$ and $\Sigma$
+and adopt the understanding that each of them is computed with a reduced SVD.


 Thus, we now use the notation
-that the $m \times p$ matrix is defined as
+that the $m \times p$ matrix $\Phi$ is defined as

 $$
 \Phi = X' V \Sigma^{-1} W
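
Continuing the illustrative sketch above (same hypothetical variable names), the Tu et al. result referred to in this hunk can be checked numerically: the columns of $\Phi = X' V \Sigma^{-1} W$ satisfy $\hat A \Phi = \Phi \Lambda$.

```python
# Continuing the sketch: Phi = X' V Sigma^{-1} W, whose columns -- by the
# Tu et al. result cited in the lecture -- are eigenvectors of A_hat.
Phi = Xp @ V @ np.diag(1.0 / S) @ W

# A_hat Phi = Phi Lambda, column by column
print(np.allclose(A_hat @ Phi, Phi @ np.diag(Lam)))   # True (up to round-off)
```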
@@ -1135,10 +1135,13 @@ Thus, $\phi_i$ is an eigenvector of $\hat A$ that corresponds to eigenvalue $\l

 This concludes the proof.

+Also see {cite}`DDSE_book` (p. 238)
+
+
+### Decoder of $X$ as linear projection



-Also see {cite}`DDSE_book` (p. 238)



@@ -1162,16 +1165,6 @@ $$
 $$ (eq:decoder102)


-Here $\check b_t$ is a $p \times 1$ vector of regression coefficients, being component of $\check b$
-corresponding to column $t$ of the $p \times n$ matrix of regression coefficients
-
-$$
-\check b = \Phi^{\dagger} X .
-$$ (eq:decoder103)
-
-Furthermore, $\check X_t$ is the $m\times 1$ vector of decoded or projected values of $X_t$ corresponding
-to column $t$ of the $m \times n$ matrix $X$.
-
 Since $\Phi$ has $p$ linearly independent columns, the generalized inverse of $\Phi$ is

 $$
@@ -1184,19 +1177,22 @@ $$
 \check b = (\Phi^T \Phi)^{-1} \Phi^T X
 $$ (eq:checkbform)

-Here $\check b$ can be recognized as a matrix of least squares regression coefficients of the matrix
-$X$ on the matrix $\Phi$ and $\Phi \check b$ is the least squares projection of $X$ on $\Phi$.
+$\check b$ is recognizable as the matrix of least squares regression coefficients of the matrix
+$X$ on the matrix $\Phi$ and
+
+$$
+\check X = \Phi \check b
+$$
+
+is the least squares projection of $X$ on $\Phi$.



-In more detail, by virtue of least-squares projection theory discussed here <https://python-advanced.quantecon.org/orth_proj.html>,
-we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$
+By virtue of least-squares projection theory discussed here <https://python-advanced.quantecon.org/orth_proj.html>,
+we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$ plus a matrix of errors.


-$$
-\check X_t = \Phi \check b_t
-$$

-The least squares projection $\check X$ is related to $X$ by
+To verify this, note that the least squares projection $\check X$ is related to $X$ by


 $$
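
The decoder algebra in this hunk, eq:checkbform together with the projection $\check X = \Phi \check b$, can also be sketched numerically with the illustrative names used above; the residual $X - \check X$ is annihilated by $\Phi^T$, consistent with the error decomposition described here.

```python
# Sketch of the decoder step: regress X on Phi exactly as in eq:checkbform
# (plain transpose, following the lecture's notation), form the projection
# X_check = Phi b_check, and confirm that Phi^T annihilates the errors.
b_check = np.linalg.solve(Phi.T @ Phi, Phi.T @ X)   # (Phi^T Phi)^{-1} Phi^T X
X_check = Phi @ b_check                             # least-squares projection of X on Phi
errors  = X - X_check                               # X = X_check + errors

print(np.allclose(Phi.T @ errors, 0.0))             # True (up to round-off)
```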
@@ -1289,7 +1285,7 @@ $$
 $$ (eq:bphieqn)


-The literature on DMD sometimes labels components of the basis vector $\check b_t = \Phi^+ X_t \equiv (W \Lambda)^{-1} U^T X_t$ as **exact** DMD nodes.
+Users of DMD sometimes call components of the basis vector $\check b_t = \Phi^+ X_t \equiv (W \Lambda)^{-1} U^T X_t$ the **exact** DMD modes.

 Conditional on $X_t$, we can compute our decoded $\check X_{t+j}, j = 1, 2, \ldots $ from
 either
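
Finally, the formula $\check b_t = \Phi^+ X_t \equiv (W \Lambda)^{-1} U^T X_t$ appearing in this last hunk can be sanity-checked with the same illustrative sketch: $(W \Lambda)^{-1} U^T$ is a left inverse of $\Phi$, so it can stand in for a generalized inverse of $\Phi$ in the decoding step. This check is an assumption-laden illustration, not part of the lecture file.

```python
# Sanity check: (W Lambda)^{-1} U^T is a left inverse of Phi, consistent with
# its role as Phi^+ in the exact-DMD decoding formula quoted above.
Phi_plus = np.linalg.inv(W @ np.diag(Lam)) @ U.T    # (W Lambda)^{-1} U^T

print(np.allclose(Phi_plus @ Phi, np.eye(p)))       # True (up to round-off)

b_t = Phi_plus @ X[:, -1]                           # decoded coefficients for one snapshot
```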
