Commit 7c57b55

Tom's second March 28 edits of svd lecture
1 parent 5f9e895 commit 7c57b55

File tree

1 file changed (+11, -12 lines)


lectures/svd_intro.md

Lines changed: 11 additions & 12 deletions
@@ -728,11 +728,11 @@ $$ (eq:SVDDMD)
 
 where $ U $ is $ m \times p $, $ \Sigma $ is a $ p \times p $ diagonal matrix, and $ V^T $ is a $ p \times \tilde n $ matrix.
 
-Here $ p $ is the rank of $ X $, where necessarily $ p \leq \tilde n $.
+Here $ p $ is the rank of $ X $, where necessarily $ p \leq \tilde n $ because we are in the case in which $m >> \tilde n$.
 
 
-We can use the singular value decomposition {eq}`eq:SVDDMD` efficiently to construct the pseudo-inverse $X^+$
-by exploiting the implication of the following string of equalities:
+Since we are in the $m >> \tilde n$ case, we can use the singular value decomposition {eq}`eq:SVDDMD` efficiently to construct the pseudo-inverse $X^+$
+by recognizing the following string of equalities.
 
 $$
 \begin{aligned}
@@ -744,12 +744,6 @@ X^{+} & = (X^T X)^{-1} X^T \\
 \end{aligned}
 $$ (eq:efficientpseudoinverse)
 
-
-(We described and illustrated a **reduced** singular value decomposition above, and compared it with a **full** singular value decomposition.)
-
-
-
-
 Thus, we shall construct a pseudo-inverse $ X^+ $ of $ X $ by using
 a singular value decomposition of $X$ in equation {eq}`eq:SVDDMD` to compute
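The string of equalities in {eq}`eq:efficientpseudoinverse` reduces computing $X^+$ to inverting the $p \times p$ diagonal matrix $\Sigma$ from a reduced SVD. A minimal NumPy sketch of this construction (the matrix here is a random stand-in, not the lecture's data):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_tilde = 100, 5                 # tall case: m >> n_tilde
X = rng.standard_normal((m, n_tilde))

# Reduced SVD: U is m x p, sigma holds the p singular values, VT is p x n_tilde
U, sigma, VT = np.linalg.svd(X, full_matrices=False)

# Pseudo-inverse via the SVD: X^+ = V Sigma^{-1} U^T
X_plus = VT.T @ np.diag(1 / sigma) @ U.T

# Agrees with NumPy's built-in pseudo-inverse
assert np.allclose(X_plus, np.linalg.pinv(X))
```

Because a random tall $X$ has full column rank almost surely, $X^+ X = I$ holds here as well.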
@@ -771,6 +765,8 @@ $$
 In addition to doing that, we’ll eventually use **dynamic mode decomposition** to compute a rank $ r $ approximation to $ A $,
 where $ r < p $.
 
+**Remark:** We described and illustrated a **reduced** singular value decomposition above, and compared it with a **full** singular value decomposition.
+In our Python code, we'll typically use a reduced SVD.
 
 
 Next, we turn to two alternative __reduced order__ representations of our dynamic system.
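The reduced-versus-full distinction in the remark corresponds, in NumPy, to the `full_matrices` flag of `np.linalg.svd`; a quick shape check on illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_tilde = 8, 3
X = rng.standard_normal((m, n_tilde))

# Full SVD: U is m x m, V^T is n_tilde x n_tilde
U_full, s_full, VT_full = np.linalg.svd(X, full_matrices=True)

# Reduced SVD: U is m x p, with p = min(m, n_tilde) here
U_red, s_red, VT_red = np.linalg.svd(X, full_matrices=False)

print(U_full.shape, U_red.shape)   # (8, 8) (8, 3)
```

The singular values are identical in both cases; only the shapes of the orthonormal factors differ.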
@@ -810,7 +806,7 @@ $$
 \tilde A = U^T \hat A U
 $$
 
-We can evidently recover $A$ from
+We can evidently recover $\hat A$ from
 
 $$
 \hat A = U \tilde A U^T
@@ -848,7 +844,7 @@ $\Lambda$.
 Note that
 
 $$
-A = U \tilde U^T = U W \Lambda W^{-1} U^T
+A = U \tilde A U^T = U W \Lambda W^{-1} U^T
 $$
 
 Thus, the systematic part of the $X_t$ dynamics captured by our first-order vector autoregressions are described by
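The identity $U \tilde A U^T = U W \Lambda W^{-1} U^T$, with $W, \Lambda$ the eigenvectors and eigenvalues of $\tilde A$, can be verified numerically; in this sketch $U$ and $\tilde A$ are random stand-ins rather than quantities computed from data:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 6, 3

# U with orthonormal columns, as a reduced SVD would deliver
U, _ = np.linalg.qr(rng.standard_normal((m, p)))
A_tilde = rng.standard_normal((p, p))

# Eigendecomposition  A_tilde = W Lambda W^{-1}
Lam, W = np.linalg.eig(A_tilde)

A_hat = U @ A_tilde @ U.T
# Check  A_hat = U W Lambda W^{-1} U^T  (complex arithmetic, ~real result)
lhs = U @ W @ np.diag(Lam) @ np.linalg.inv(W) @ U.T
assert np.allclose(lhs, A_hat)
```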
@@ -885,7 +881,10 @@ We can use this representation to predict future $X_t$'s via:
 
 $$
 \overline X_{t+1} = U W \Lambda^t W^{-1} U^T X_1
-$$
+$$ (eq:DSSEbookrepr)
+
+**Remark** {cite}`DDSE_book` (p. 238) constructs a version of representation {eq}`eq:DSSEbookrepr` in terms of an $m \times p$ matrix $\Phi = UW$.
+Also, see Tu et al. {cite}`tu_Rowley`.
 
 
 
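The prediction formula $\overline X_{t+1} = U W \Lambda^t W^{-1} U^T X_1$ can be checked against direct iteration of $\hat A = U \tilde A U^T$, since $\hat A^t = U \tilde A^t U^T$ whenever $U^T U = I$. A sketch with random stand-in matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
m, p, t = 6, 3, 4

U, _ = np.linalg.qr(rng.standard_normal((m, p)))  # orthonormal columns
A_tilde = rng.standard_normal((p, p))
Lam, W = np.linalg.eig(A_tilde)                   # A_tilde = W Lambda W^{-1}
X1 = rng.standard_normal(m)

# Spectral form of the t-step prediction: U W Lambda^t W^{-1} U^T X_1
pred_spectral = U @ W @ np.diag(Lam**t) @ np.linalg.inv(W) @ U.T @ X1

# Same prediction by iterating A_hat = U A_tilde U^T directly
A_hat = U @ A_tilde @ U.T
pred_iter = np.linalg.matrix_power(A_hat, t) @ X1

assert np.allclose(pred_spectral, pred_iter)
```

The spectral form is cheaper when many horizons $t$ are needed, since only the $p$ eigenvalues are re-powered.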
