lectures/svd_intro.md: 98 additions & 48 deletions
@@ -663,91 +663,141 @@ An important property of the DMD algorithm that we shall describe soon is that
### Preliminary Analysis

We'll put basic ideas on the table by starting with the special case in which $r = p$.

Thus, we retain all $p$ singular values of $X$.

(Later, we'll retain only $r < p$ of them.)
When $r = p$, formula {eq}`eq:Xpinverse` implies that

$$
A = X' V \Sigma^{-1} U^T
$$ (eq:Aformbig)

where $V$ is an $\tilde n \times p$ matrix, $\Sigma^{-1}$ is a $p \times p$ matrix, and $U$ is an $m \times p$ matrix,
and where $U^T U = I_p$ and $V^T V = I_p$.
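As a quick numerical check on formula {eq}`eq:Aformbig`, here is a minimal sketch that is not part of the lecture itself: it uses simulated data and the hypothetical names `X` and `X_prime` for $X$ and $X'$, and compares {eq}`eq:Aformbig` with $A = X' X^{+}$ computed directly from a pseudoinverse.

```python
import numpy as np

# Illustrative data only: m variables observed over n + 1 periods
np.random.seed(0)
m, n = 5, 40
data = np.random.randn(m, n + 1)
X, X_prime = data[:, :-1], data[:, 1:]   # columns X_1,...,X_n and X_2,...,X_{n+1}

# Reduced SVD of X: here U is m x p, Sigma is p x p, V is n x p
U, sing_vals, Vh = np.linalg.svd(X, full_matrices=False)
V, Sigma = Vh.T, np.diag(sing_vals)

# Formula (eq:Aformbig): A = X' V Sigma^{-1} U^T
A = X_prime @ V @ np.linalg.inv(Sigma) @ U.T

# Same matrix obtained from the pseudoinverse, A = X' X^+
print(np.allclose(A, X_prime @ np.linalg.pinv(X)))   # expect True
```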
We use the $p$ columns of $U$, and thus the $p$ rows of $U^T$, to define a $p \times 1$ vector $\tilde X_t$ to be used in a lower-dimensional description of the evolution of the system:

$$
\tilde X_t = U^T X_t .
$$

Since $U^T U$ is a $p \times p$ identity matrix and each $X_t$ lies in the column space of $U$, we can recover $X_t$ from $\tilde X_t$ by using

$$
X_t = U \tilde X_t .
$$

The following $p \times p$ transition matrix governs the motion of $\tilde X_t$:

$$
\tilde A = U^T A U = U^T X' V \Sigma^{-1} .
$$ (eq:Atilde0)

Evidently,

$$
\tilde X_{t+1} = \tilde A \tilde X_t
$$ (eq:xtildemotion)

Notice that if we multiply both sides of {eq}`eq:xtildemotion` by $U$ we get

$$
U \tilde X_{t+1} = U \tilde A \tilde X_t = U \tilde A U^T X_t
$$

which gives

$$
X_{t+1} = A X_t .
$$
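Continuing the sketch above (again with simulated data and hypothetical variable names, not the lecture's own code), we can illustrate the $r = p$ case: form $\tilde X_t = U^T X_t$, check the recovery $X_t = U \tilde X_t$, and verify that $\tilde A$ in {eq}`eq:Atilde0` equals $U^T A U$.

```python
import numpy as np

# Same illustrative setup as in the earlier sketch
np.random.seed(0)
m, n = 5, 40
data = np.random.randn(m, n + 1)
X, X_prime = data[:, :-1], data[:, 1:]

U, sing_vals, Vh = np.linalg.svd(X, full_matrices=False)
V, Sigma = Vh.T, np.diag(sing_vals)

A = X_prime @ V @ np.linalg.inv(Sigma) @ U.T          # (eq:Aformbig)

# Reduced coordinates X_tilde_t = U^T X_t, stored column by column
X_tilde = U.T @ X

# Each X_t lies in the column space of U, so U X_tilde_t recovers X_t
print(np.allclose(U @ X_tilde, X))                    # expect True

# Transition matrix (eq:Atilde0): A_tilde = U^T A U = U^T X' V Sigma^{-1}
A_tilde = U.T @ X_prime @ V @ np.linalg.inv(Sigma)
print(np.allclose(A_tilde, U.T @ A @ U))              # expect True
```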
### Lower Rank Approximations

An attractive feature of **dynamic mode decomposition** is that we avoid computing the huge matrix $A = X' X^{+}$ of regression coefficients, while, with low computational effort, we possibly acquire a good low-rank approximation of $A$.

Instead of using formula {eq}`eq:Aformbig`, we'll compute the $r$ largest singular values of $X$ and form matrices $\tilde V, \tilde U$ corresponding to those $r$ singular values.

We'll then construct a reduced-order system of dimension $r$ by forming an $r \times r$ transition matrix
$\tilde A$ redefined by

$$
\tilde A = \tilde U^T A \tilde U
$$ (eq:tildeA_1)

This redefined $\tilde A$ matrix governs the dynamics of a redefined $r \times 1$ vector $\tilde X_t$
according to

$$
\tilde X_{t+1} = \tilde A \tilde X_t
$$

where an approximation $\check X_t$ to the original $m \times 1$ vector $X_t$ can be acquired by projecting $X_t$ onto the subspace spanned by
the columns of $\tilde U$:

$$
\check X_t = \tilde U \tilde X_t
$$

We'll provide a formula for $\tilde X_t$ soon.

From equations {eq}`eq:tildeA_1` and {eq}`eq:Aformbig` it follows that

$$
\tilde A = \tilde U^T X' \tilde V \tilde \Sigma^{-1}
$$ (eq:tildeAform)
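Here is a sketch of the reduced-rank construction, again using simulated data and hypothetical names `U_r`, `V_r`, `Sigma_r` for $\tilde U, \tilde V, \tilde \Sigma$; it checks formula {eq}`eq:tildeAform` against the definition {eq}`eq:tildeA_1` and forms the projection $\check X_t = \tilde U \tilde X_t$.

```python
import numpy as np

# Same illustrative setup as in the earlier sketches
np.random.seed(0)
m, n = 5, 40
data = np.random.randn(m, n + 1)
X, X_prime = data[:, :-1], data[:, 1:]

U, sing_vals, Vh = np.linalg.svd(X, full_matrices=False)
V, Sigma = Vh.T, np.diag(sing_vals)
A = X_prime @ V @ np.linalg.inv(Sigma) @ U.T          # full-rank A from (eq:Aformbig)

# Keep only the r largest singular values
r = 3
U_r, V_r, Sigma_r = U[:, :r], V[:, :r], np.diag(sing_vals[:r])

# (eq:tildeAform): A_tilde = U_r^T X' V_r Sigma_r^{-1}
A_tilde = U_r.T @ X_prime @ V_r @ np.linalg.inv(Sigma_r)

# agrees with the definition (eq:tildeA_1): A_tilde = U_r^T A U_r
print(np.allclose(A_tilde, U_r.T @ A @ U_r))          # expect True

# reduced coordinates and the projection back onto the column space of U_r
X_tilde = U_r.T @ X           # r x n reduced coordinates
X_check = U_r @ X_tilde       # projection of each X_t onto the column space of U_r
print(np.linalg.norm(X - X_check))   # error from discarding the p - r smallest singular values
```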
Next, we'll construct an eigendecomposition of $\tilde A$:

$$
\tilde A W = W \Lambda
$$ (eq:tildeAeigen)

where $\Lambda$ is an $r \times r$ diagonal matrix of eigenvalues and the columns of $W$ are corresponding eigenvectors
of $\tilde A$.

Both $\Lambda$ and $W$ are $r \times r$ matrices.

A key step now is to construct the $m \times r$ matrix

$$
\Phi = X' \tilde V \tilde \Sigma^{-1} W
$$ (eq:Phiformula)

As asserted above, and as we shall soon verify, columns of $\Phi$ are eigenvectors of $A$ corresponding to the largest $r$ eigenvalues of $A$.
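The sketch below (simulated data, hypothetical names) computes $W$, $\Lambda$, and $\Phi$ and checks the eigenvector property numerically. Note one assumption introduced here: the check uses `A_r`, the rank-$r$ matrix $X' \tilde V \tilde \Sigma^{-1} \tilde U^T$ built from the retained singular values, for which the identity $A_r \Phi = \Phi \Lambda$ holds exactly; it is a stand-in for $A$ in this sketch, not a claim from the lecture about the full $A$.

```python
import numpy as np

# Same illustrative setup as in the earlier sketches
np.random.seed(0)
m, n = 5, 40
data = np.random.randn(m, n + 1)
X, X_prime = data[:, :-1], data[:, 1:]

U, sing_vals, Vh = np.linalg.svd(X, full_matrices=False)
V = Vh.T

r = 3
U_r, V_r, Sigma_r = U[:, :r], V[:, :r], np.diag(sing_vals[:r])

A_tilde = U_r.T @ X_prime @ V_r @ np.linalg.inv(Sigma_r)   # (eq:tildeAform)

# eigendecomposition (eq:tildeAeigen): A_tilde W = W Lambda
eigvals, W = np.linalg.eig(A_tilde)
Lambda = np.diag(eigvals)

# (eq:Phiformula): Phi = X' V_r Sigma_r^{-1} W, an m x r matrix
Phi = X_prime @ V_r @ np.linalg.inv(Sigma_r) @ W

# rank-r approximation of A built from the retained singular values (assumption of this sketch)
A_r = X_prime @ V_r @ np.linalg.inv(Sigma_r) @ U_r.T

# each column of Phi is an eigenvector of A_r with the matching eigenvalue from Lambda
print(np.allclose(A_r @ Phi, Phi @ Lambda))                 # expect True
```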
We can construct an $r \times m$ matrix generalized inverse $\Phi^{+}$ of $\Phi$.

We define an $r \times 1$ initial vector $b$ of dominant modes by

$$
b = \Phi^{+} X_1
$$ (eq:bphieqn)
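Finally, a sketch of the amplitude vector in {eq}`eq:bphieqn`, computed with a pseudoinverse of $\Phi$ under the same hypothetical setup as the earlier sketches.

```python
import numpy as np

# Same illustrative setup as in the earlier sketches
np.random.seed(0)
m, n = 5, 40
data = np.random.randn(m, n + 1)
X, X_prime = data[:, :-1], data[:, 1:]

U, sing_vals, Vh = np.linalg.svd(X, full_matrices=False)
V = Vh.T

r = 3
U_r, V_r, Sigma_r = U[:, :r], V[:, :r], np.diag(sing_vals[:r])
A_tilde = U_r.T @ X_prime @ V_r @ np.linalg.inv(Sigma_r)
eigvals, W = np.linalg.eig(A_tilde)
Phi = X_prime @ V_r @ np.linalg.inv(Sigma_r) @ W            # (eq:Phiformula)

# (eq:bphieqn): b = Phi^+ X_1, an r x 1 vector of mode amplitudes
Phi_plus = np.linalg.pinv(Phi)                              # r x m generalized inverse
b = Phi_plus @ X[:, 0]

# Phi b is the least-squares representation of X_1 within the column space of Phi
print(b.shape, np.linalg.norm(Phi @ b - X[:, 0]))
```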
@@ -779,7 +829,7 @@

$$
A \phi_i = \lambda_i \phi_i .
$$

Thus, $\phi_i$ is an eigenvector of $A$ that corresponds to the eigenvalue $\lambda_i$ of $A$.