File: `lectures/svd_intro.md`
@@ -728,11 +728,11 @@ $$ (eq:SVDDMD)

where $ U $ is $ m \times p $, $ \Sigma $ is a $ p \times p $ diagonal matrix, and $ V^T $ is a $ p \times \tilde n $ matrix.

Here $ p $ is the rank of $ X $, where necessarily $ p \leq \tilde n $ because we are in the case in which $m \gg \tilde n$.

Since we are in the $m \gg \tilde n$ case, we can use the singular value decomposition {eq}`eq:SVDDMD` efficiently to construct the pseudo-inverse $X^+$ by recognizing the following string of equalities.

Thus, we shall construct a pseudo-inverse $ X^+ $ of $ X $ by using a singular value decomposition of $X$ in equation {eq}`eq:SVDDMD` to compute
@@ -771,6 +765,8 @@ $$

In addition to doing that, we'll eventually use **dynamic mode decomposition** to compute a rank $ r $ approximation to $ A $, where $ r < p $.

**Remark:** We described and illustrated a **reduced** singular value decomposition above, and compared it with a **full** singular value decomposition. In our Python code, we'll typically use a reduced SVD.
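The practical difference between the two decompositions is just the `full_matrices` flag in NumPy (dimensions below are illustrative):

```python
import numpy as np

m, n_tilde = 100, 4
rng = np.random.default_rng(1)
X = rng.standard_normal((m, n_tilde))

# Full SVD: U is m x m, V^T is n_tilde x n_tilde
U_full, s_full, VT_full = np.linalg.svd(X, full_matrices=True)

# Reduced SVD: U is m x n_tilde -- far smaller when m >> n_tilde
U_red, s_red, VT_red = np.linalg.svd(X, full_matrices=False)

assert U_full.shape == (m, m)
assert U_red.shape == (m, n_tilde)

# Both factorizations reproduce X
assert np.allclose(U_red @ np.diag(s_red) @ VT_red, X)
```

When $m \gg \tilde n$, the reduced form avoids storing the $m \times m$ orthogonal matrix, which is why the Python code favors it.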
Next, we turn to two alternative __reduced order__ representations of our dynamic system.
@@ -810,7 +806,7 @@ $$

$$
\tilde A = U^T \hat A U
$$

We can evidently recover $\hat A$ from

$$
\hat A = U \tilde A U^T
$$
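A quick numerical sketch of this round trip (illustrative sizes and random data): when $U$ has orthonormal columns, so that $U^T U = I_p$, and $\hat A$ has the form $U M U^T$, projecting down to $\tilde A = U^T \hat A U$ and lifting back with $U \tilde A U^T$ recovers $\hat A$ exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 50, 3

# U with orthonormal columns, as delivered by a reduced SVD
U, _ = np.linalg.qr(rng.standard_normal((m, p)))

# An A hat whose row and column spaces lie in span(U)
M = rng.standard_normal((p, p))
A_hat = U @ M @ U.T

# Project down to p x p, then lift back up to m x m
A_tilde = U.T @ A_hat @ U
A_hat_recovered = U @ A_tilde @ U.T

assert np.allclose(A_hat_recovered, A_hat)
```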
@@ -848,7 +844,7 @@ $\Lambda$.

Note that

$$
A = U \tilde A U^T = U W \Lambda W^{-1} U^T
$$
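This identity can be verified numerically (a sketch with illustrative dimensions and a random $\tilde A$): eigendecompose $\tilde A = W \Lambda W^{-1}$ and check that $U W \Lambda W^{-1} U^T$ reproduces $U \tilde A U^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, p = 40, 4
U, _ = np.linalg.qr(rng.standard_normal((m, p)))  # orthonormal columns
A_tilde = rng.standard_normal((p, p))

# Eigendecomposition A_tilde = W Lambda W^{-1}
Lam, W = np.linalg.eig(A_tilde)

A = U @ A_tilde @ U.T
A_via_eig = U @ W @ np.diag(Lam) @ np.linalg.inv(W) @ U.T

# Imaginary parts are numerically negligible when A_tilde is real
assert np.allclose(A, A_via_eig)
```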
Thus, the systematic part of the $X_t$ dynamics captured by our first-order vector autoregressions is described by
@@ -885,7 +881,10 @@

We can use this representation to predict future $X_t$'s via:

$$
\overline X_{t+1} = U W \Lambda^t W^{-1} U^T X_1
$$ (eq:DSSEbookrepr)

**Remark** {cite}`DDSE_book` (p. 238) constructs a version of representation {eq}`eq:DSSEbookrepr` in terms of an $m \times p$ matrix $\Phi = UW$.
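The prediction formula can be checked numerically (a sketch, assuming random illustrative data rather than the lecture's snapshot matrices): since $U^T U = I_p$, we have $A^t = U W \Lambda^t W^{-1} U^T$, so applying this to $X_1$ matches iterating $X_{t+1} = A X_t$ forward $t$ steps:

```python
import numpy as np

rng = np.random.default_rng(4)
m, p = 30, 3
U, _ = np.linalg.qr(rng.standard_normal((m, p)))  # orthonormal columns
A_tilde = rng.standard_normal((p, p))
Lam, W = np.linalg.eig(A_tilde)

A = U @ A_tilde @ U.T
X1 = rng.standard_normal(m)

t = 5
# One-shot prediction via the diagonalized form: U W Lambda^t W^{-1} U^T X_1
X_pred = U @ W @ np.diag(Lam**t) @ np.linalg.inv(W) @ U.T @ X1

# Iterating the VAR X_{t+1} = A X_t gives the same answer
X_iter = X1.copy()
for _ in range(t):
    X_iter = A @ X_iter
assert np.allclose(X_pred, X_iter)
```

The one-shot form is cheap because $\Lambda^t$ is an elementwise power of $p$ eigenvalues, never a power of the $m \times m$ matrix $A$.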