lectures/svd_intro.md (22 additions & 26 deletions)
@@ -1047,10 +1047,10 @@ $$
 \tilde A =\tilde U^T \hat A \tilde U
 $$ (eq:Atildered)
 
-Because we are now working with a reduced SVD, so that $\tilde U \tilde U^T \neq I$, we can't recover $\hat A$ from $ \hat A \neq \tilde U \tilde A \tilde U^T$.
+Because we are now working with a reduced SVD, for which $\tilde U \tilde U^T \neq I$ and hence $\hat A \neq \tilde U \tilde A \tilde U^T$, we can't simply recover $\hat A$ from $\tilde A$ and $\tilde U$.
 
 
-Nevertheless, hoping for the best, we trudge on and construct an eigendecomposition of what is now a
+Nevertheless, hoping for the best, we persist and construct an eigendecomposition of what is now a
 $p \times p$ matrix $\tilde A$:
 
 $$
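
In NumPy terms, this construction might be sketched as follows. This is a minimal illustrative sketch, not code from the lecture: `X`, `X_prime`, the sizes `m, n`, and the truncation rank `p` are hypothetical stand-ins for $X$, $X'$, and the number of retained singular values, and $\hat A = X' \tilde V \tilde \Sigma^{-1} \tilde U^T$ is taken from the surrounding discussion.

```python
import numpy as np

# hypothetical m x n data matrices standing in for X and the shifted matrix X'
rng = np.random.default_rng(0)
m, n, p = 10, 50, 3
X = rng.standard_normal((m, n))
X_prime = rng.standard_normal((m, n))

# reduced SVD of X truncated at rank p: X ~ U_tilde Sigma_tilde V_tilde^T
U, S, Vh = np.linalg.svd(X, full_matrices=False)
U_tilde = U[:, :p]              # m x p, orthonormal columns
Sigma_tilde = np.diag(S[:p])    # p x p
V_tilde = Vh[:p, :].T           # n x p

# A_hat = X' V_tilde Sigma_tilde^{-1} U_tilde^T and its p x p projection
# A_tilde = U_tilde^T A_hat U_tilde = U_tilde^T X' V_tilde Sigma_tilde^{-1}
A_hat = X_prime @ V_tilde @ np.linalg.inv(Sigma_tilde) @ U_tilde.T
A_tilde = U_tilde.T @ X_prime @ V_tilde @ np.linalg.inv(Sigma_tilde)

# eigendecomposition A_tilde W = W Lambda
eigvals, W = np.linalg.eig(A_tilde)
Lambda = np.diag(eigvals)
```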
@@ -1081,7 +1081,7 @@ That
 $ \hat A \tilde \Phi_s \neq \tilde \Phi_s \Lambda $ means that, unlike the corresponding situation in Representation 2, columns of $\tilde \Phi_s = \tilde U W$
 are **not** eigenvectors of $\hat A$ corresponding to eigenvalues $\Lambda$.
 
-But in the quest for eigenvectors of $\hat A$ that we can compute with a reduced SVD, let's define
+But in a quest for eigenvectors of $\hat A$ that we *can* compute with a reduced SVD, let's define
 
 $$
 \Phi \equiv \hat A \tilde \Phi_s = X' \tilde V \tilde \Sigma^{-1} W
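
Continuing the hypothetical sketch above, a quick numerical check illustrates that with a reduced SVD the columns of $\tilde \Phi_s = \tilde U W$ generally fail to be eigenvectors of $\hat A$, while the redefined $\Phi = \hat A \tilde \Phi_s$ remains computable from $X'$, $\tilde V$, $\tilde \Sigma$, and $W$:

```python
# Phi_s_tilde = U_tilde W: with a reduced SVD its columns are generally NOT eigenvectors of A_hat
Phi_s_tilde = U_tilde @ W
print(np.allclose(A_hat @ Phi_s_tilde, Phi_s_tilde @ Lambda))   # typically False when p < m

# the candidate defined in the text: Phi = A_hat Phi_s_tilde = X' V_tilde Sigma_tilde^{-1} W
Phi = A_hat @ Phi_s_tilde
print(np.allclose(Phi, X_prime @ V_tilde @ np.linalg.inv(Sigma_tilde) @ W))  # True by construction
```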
@@ -1090,12 +1090,12 @@ $$
 It turns out that columns of $\Phi$ **are** eigenvectors of $\hat A$,
 a consequence of a result established by Tu et al. {cite}`tu_Rowley`.
 
-To present their result, for convenience we'll drop the tilde $\tilde \cdot$ for $U, V,$ and $\Sigma$
-and adopt the understanding that they are computed with a reduced SVD.
+To present their result, for convenience we'll drop the tilde $\tilde \cdot$ above $U, V,$ and $\Sigma$
+and adopt the understanding that each of them is computed with a reduced SVD.
 
 
 Thus, we now use the notation
-that the $m \times p$ matrix is defined as
+that the $m \times p$ matrix $\Phi$ is defined as
 
 $$
 \Phi = X' V \Sigma^{-1} W
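
A numerical spot-check of the Tu et al. result, again using the hypothetical arrays from the sketch above:

```python
# Tu et al.: columns of Phi are eigenvectors of A_hat with eigenvalues Lambda, since
# A_hat Phi = X' V Sigma^{-1} (U^T X' V Sigma^{-1}) W = X' V Sigma^{-1} A_tilde W
#           = X' V Sigma^{-1} W Lambda = Phi Lambda
print(np.allclose(A_hat @ Phi, Phi @ Lambda))   # True up to floating-point error
```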
@@ -1135,10 +1135,13 @@ Thus, $\phi_i$ is an eigenvector of $\hat A$ that corresponds to eigenvalue $\lambda_i$.
 
 This concludes the proof.
 
+Also see {cite}`DDSE_book` (p. 238)
+
+
+
 ### Decoder of $X$ as linear projection
 
 
-Also see {cite}`DDSE_book` (p. 238)
 
 
 
@@ -1162,16 +1165,6 @@ $$
 $$ (eq:decoder102)
 
 
-Here $\check b_t$ is a $p \times 1$ vector of regression coefficients, being component of $\check b$
-corresponding to column $t$ of the $p \times n$ matrix of regression coefficients
-
-$$
-\check b = \Phi^{\dagger} X .
-$$ (eq:decoder103)
-
-Furthermore, $\check X_t$ is the $m\times 1$ vector of decoded or projected values of $X_t$ corresponding
-to column $t$ of the $m \times n$ matrix $X$.
-
 Since $\Phi$ has $p$ linearly independent columns, the generalized inverse of $\Phi$ is
 
 $$
@@ -1184,19 +1177,22 @@ $$
 \check b = (\Phi^T \Phi)^{-1} \Phi^T X
 $$ (eq:checkbform)
 
-Here $\check b$ can be recognized as a matrix of least squares regression coefficients of the matrix
-$X$ on the matrix $\Phi$ and $\Phi \check b$ is the least squares projection of $X$ on $\Phi$.
+$\check b$ is recognizable as the matrix of least squares regression coefficients of the matrix
+$X$ on the matrix $\Phi$ and
+
+$$
+\check X = \Phi \check b
+$$
+
+is the least squares projection of $X$ on $\Phi$.
 
 
 
-In more detail, by virtue of least-squares projection theory discussed here <https://python-advanced.quantecon.org/orth_proj.html>,
-we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$
+By virtue of least-squares projection theory discussed here <https://python-advanced.quantecon.org/orth_proj.html>,
+we can represent $X$ as the sum of the projection $\check X$ of $X$ on $\Phi$ plus a matrix of errors.
 
-$$
-\check X_t = \Phi \check b_t
-$$
 
-The least squares projection $\check X$ is related to $X$ by
+To verify this, note that the least squares projection $\check X$ is related to $X$ by
 
 
 $$
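
The projection formulas can also be spot-checked numerically. This is a sketch using the hypothetical arrays from above; because $\Phi$ is generally complex, the transposes in {eq}`eq:checkbform` are implemented here as conjugate transposes.

```python
# least squares regression coefficients of X on Phi: b_check = (Phi^T Phi)^{-1} Phi^T X
b_check = np.linalg.solve(Phi.conj().T @ Phi, Phi.conj().T @ X)

# least squares projection of X on Phi and the associated matrix of errors
X_check = Phi @ b_check
errors = X - X_check

# the error matrix is orthogonal to the columns of Phi, as least-squares theory requires
print(np.allclose(Phi.conj().T @ errors, 0))
```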
@@ -1289,7 +1285,7 @@ $$
 $$ (eq:bphieqn)
 
 
-The literature on DMD sometimes labels components of the basis vector $\check b_t = \Phi^+ X_t \equiv (W \Lambda)^{-1} U^T X_t$ as **exact** DMD nodes.
+Users of DMD sometimes call components of the basis vector $\check b_t = \Phi^+ X_t \equiv (W \Lambda)^{-1} U^T X_t$ the **exact** DMD modes.
 
 Conditional on $X_t$, we can compute our decoded $\check X_{t+j}, j = 1, 2, \ldots $ from
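
The standard DMD forecasting recursion takes the form $\check X_{t+j} = \Phi \Lambda^j \check b_t$ with $\check b_t = \Phi^{+} X_t$; assuming that is the formula referred to here, a sketch with the hypothetical arrays above is:

```python
# basis vector conditional on a single snapshot X_t (column t of X)
t, j = 0, 5
b_t = np.linalg.pinv(Phi) @ X[:, t]                  # b_check_t = Phi^+ X_t

# decoded j-step-ahead value, assuming X_check_{t+j} = Phi Lambda^j b_t;
# the real part is kept because the underlying data are real
X_check_tj = (Phi @ np.linalg.matrix_power(Lambda, j) @ b_t).real
print(X_check_tj.shape)                              # (m,)
```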