
Commit 6913d5a

Tom's March 8 edits of svd lecture
1 parent f4a0699 commit 6913d5a

File tree

1 file changed (+98, -48 lines)


lectures/svd_intro.md

Lines changed: 98 additions & 48 deletions
@@ -663,91 +663,141 @@ An important properities of the DMD algorithm that we shall describe soon is tha



-An attractive feature of **dynamic mode decomposition** is that we avoid computing the huge matrix $A = X' X^{+}$ of regression coefficients, while under the right conditions, we acquire a good low-rank approximation of $A$ with low computational effort.
+### Preliminary Analysis

+We'll put basic ideas on the table by starting with the special case in which $r = p$.

-### Steps and Explanations
+Thus, we retain
+all $p$ singular values of $X$.

-To construct a DMD, we deploy the following steps:
+(Later, we'll retain only $r < p$ of them)

-
-* As mentioned above, though it would be costly, we could compute an $m \times m$ matrix $A$ by solving
+When $r = p$, formula
+{eq}`eq:Xpinverse` implies that

-$$
-A = X' V \Sigma^{-1} U^T
-$$ (eq:bigAformula)

-
-
-But we won't do that.
+$$
+A = X' V \Sigma^{-1} U^T
+$$ (eq:Aformbig)

-We'll compute the $r$ largest singular values of $X$ and form matrices $\tilde V, \tilde U$ corresponding to those $r$ singular values.
-
-
+where $V$ is an $\tilde n \times p$ matrix, $\Sigma^{-1}$ is a $p \times p$ matrix, and $U$ is an $m \times p$ matrix,
+and where $U^T U = I_p$ and $V^T V = I_p$.
+
+We use the $p$ columns of $U$, and thus the $p$ rows of $U^T$, to define a $p \times 1$ vector $\tilde X_t$ to be used in a lower-dimensional description of the evolution of the system:
+
+
+$$
+\tilde X_t = U^T X_t .
+$$
+
+Since $U^T U$ is a $p \times p$ identity matrix and each $X_t$ lies in the column space of $U$, we can recover $X_t$ from $\tilde X_t$ by using
+
+$$
+X_t = U \tilde X_t .
+$$
+
+The following $p \times p$ transition matrix governs the motion of $\tilde X_t$:
+
+$$
+\tilde A = U^T A U = U^T X' V \Sigma^{-1} .
+$$ (eq:Atilde0)
+
+Evidently,
+
+$$
+\tilde X_{t+1} = \tilde A \tilde X_t
+$$ (eq:xtildemotion)
+
+Notice that if we multiply both sides of {eq}`eq:xtildemotion` by $U$
+we get
+
+$$
+U \tilde X_{t+1} = U \tilde A \tilde X_t = U \tilde A U^T X_t
+$$
+
+which gives
+
+$$
+X_{t+1} = A X_t .
+$$
+
+
+
+
+
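
The $r = p$ relations above are easy to check numerically. Below is a minimal NumPy sketch (illustrative only, not part of the lecture source); the simulated dynamics and the names `X_prime` and `A_tilde` are assumptions made for the example.

```python
import numpy as np

# Simulate a small data set X_1, ..., X_n so the r = p formulas can be checked.
np.random.seed(0)
m, n = 6, 5
A0 = np.random.randn(m, m) / np.sqrt(m)      # true transition matrix (for data only)
data = np.empty((m, n))
data[:, 0] = np.random.randn(m)
for t in range(n - 1):
    data[:, t + 1] = A0 @ data[:, t]

X, X_prime = data[:, :-1], data[:, 1:]       # X holds X_1..X_{n-1}, X' holds X_2..X_n

# Reduced SVD of X: U is m x p, S holds the p singular values, V is (n-1) x p.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt.T

A = X_prime @ np.linalg.pinv(X)               # the "huge" regression matrix A = X' X^+
A_tilde = U.T @ X_prime @ V @ np.diag(1 / S)  # U^T X' V Sigma^{-1}, as in (eq:Atilde0)
X_tilde = U.T @ X                             # tilde X_t = U^T X_t, column by column

print(np.allclose(U @ X_tilde, X))            # X_t = U tilde X_t: recovery works
print(np.allclose(A_tilde, U.T @ A @ U))      # tilde A = U^T A U
```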
+### Lower Rank Approximations
+
+
+An attractive feature of **dynamic mode decomposition** is that we avoid computing the huge matrix $A = X' X^{+}$ of regression coefficients, while, with low computational effort, we may acquire a good low-rank approximation of $A$.
+
+
+Instead of using formula {eq}`eq:Aformbig`, we'll compute the $r$ largest singular values of $X$ and form matrices $\tilde V, \tilde U$ corresponding to those $r$ singular values.

-We'll then construct a reduced-order system of dimension $r$ by forming an $r \times r$ transition matrix
-$\tilde A$ defined by
+We'll then construct a reduced-order system of dimension $r$ by forming an $r \times r$ transition matrix
+$\tilde A$ redefined by

-$$
-\tilde A = \tilde U^T A \tilde U
-$$ (eq:tildeA_1)
+$$
+\tilde A = \tilde U^T A \tilde U
+$$ (eq:tildeA_1)

-The $\tilde A$ matrix governs the dynamics of an $r \times 1$ vector $\tilde X_t $
-according to
+This redefined $\tilde A$ matrix governs the dynamics of a redefined $r \times 1$ vector $\tilde X_t $
+according to

-$$
+$$
\tilde X_{t+1} = \tilde A \tilde X_t
-$$
+$$

-where an approximation $\check X_t$ to the original $m \times 1$ vector $X_t$ can be acquired by projecting $X_t$ onto a subspace spanned by
+where an approximation $\check X_t$ to the original $m \times 1$ vector $X_t$ can be acquired by projecting $X_t$ onto a subspace spanned by
the columns of $\tilde U$:

-$$
+$$
\check X_t = \tilde U \tilde X_t
-$$
+$$

-We'll provide a formula for $\tilde X_t$ soon.
+We'll provide a formula for $\tilde X_t$ soon.

-From equation {eq}`eq:tildeA_1` and {eq}`eq:bigAformula` it follows that
+From equations {eq}`eq:tildeA_1` and {eq}`eq:Aformbig` it follows that


-$$
+$$
\tilde A = \tilde U^T X' \tilde V \Sigma^{-1}
-$$ (eq:tildeAform)
+$$ (eq:tildeAform)


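
As a sketch of this truncated construction in NumPy (illustrative only; the data matrix and the choice `r = 3` are arbitrary stand-ins), $\tilde U$, $\tilde \Sigma$, and $\tilde V$ come from keeping the $r$ largest singular values, and $\tilde A$ follows the formula just above:

```python
import numpy as np

np.random.seed(1)
m, n, r = 8, 6, 3                            # keep only the r largest singular values
data = np.cumsum(np.random.randn(m, n), axis=1)   # any m x n data matrix will do here
X, X_prime = data[:, :-1], data[:, 1:]

U, S, Vt = np.linalg.svd(X, full_matrices=False)
U_tilde = U[:, :r]                           # m x r
S_tilde = S[:r]                              # r largest singular values of X
V_tilde = Vt[:r, :].T                        # (n-1) x r

# tilde A = tilde U^T X' tilde V Sigma^{-1}
A_tilde = U_tilde.T @ X_prime @ V_tilde @ np.diag(1 / S_tilde)
print(A_tilde.shape)                         # (r, r)
```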
-* Construct an eigencomposition of $\tilde A$
+Next, we'll construct an eigendecomposition of $\tilde A$

-$$
+$$
\tilde A W = W \Lambda
-$$ (eq:tildeAeigen)
+$$ (eq:tildeAeigen)

-where $\Lambda$ is a $r \times r$ diagonal matrix of eigenvalues and the columns of $W$ are corresponding eigenvectors
-of $\tilde A$. Both $\Lambda$ and $W$ are $r \times r$ matrices.
+where $\Lambda$ is an $r \times r$ diagonal matrix of eigenvalues and the columns of $W$ are corresponding eigenvectors
+of $\tilde A$.
+
+Both $\Lambda$ and $W$ are $r \times r$ matrices.

-* A key step now is to construct the $m \times r$ matrix
+A key step now is to construct the $m \times r$ matrix

-$$
+$$
\Phi = X' \tilde V \tilde \Sigma^{-1} W
-$$ (eq:Phiformula)
+$$ (eq:Phiformula)

-As asserted above, and as we shall soon verify, columns of $\Phi$ are eigenvectors of $A$ corresponding to the largest $r$ eigenvalues of $A$.
+As asserted above, and as we shall soon verify, columns of $\Phi$ are eigenvectors of $A$ corresponding to the largest $r$ eigenvalues of $A$.



-We can construct an $r \times m$ matrix generalized inverse $\Phi^{+}$ of $\Phi$.
+We can construct an $r \times m$ matrix generalized inverse $\Phi^{+}$ of $\Phi$.



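
A sketch of the eigendecomposition and of $\Phi$, again in NumPy and again with made-up data (the data are generated from low-dimensional dynamics so that the asserted eigenvector property can be verified almost exactly; every name below is an assumption of the example):

```python
import numpy as np

np.random.seed(2)
m, n, r = 8, 6, 3
# Embed r-dimensional dynamics in R^m so that a rank-r DMD is (nearly) exact.
Q = np.linalg.qr(np.random.randn(m, r))[0]           # m x r, orthonormal columns
B = 0.9 * np.linalg.qr(np.random.randn(r, r))[0]     # stable r x r transition matrix
z = np.random.randn(r)
data = np.column_stack([Q @ np.linalg.matrix_power(B, t) @ z for t in range(n)])
X, X_prime = data[:, :-1], data[:, 1:]

U, S, Vt = np.linalg.svd(X, full_matrices=False)
U_tilde, S_tilde, V_tilde = U[:, :r], S[:r], Vt[:r, :].T
A_tilde = U_tilde.T @ X_prime @ V_tilde @ np.diag(1 / S_tilde)

Lam, W = np.linalg.eig(A_tilde)                      # tilde A W = W Lambda
Phi = X_prime @ V_tilde @ np.diag(1 / S_tilde) @ W   # the m x r matrix Phi

A = X_prime @ np.linalg.pinv(X)                      # full regression matrix, for comparison
print(np.linalg.norm(A @ Phi - Phi @ np.diag(Lam)))  # close to zero: A Phi = Phi Lambda
```

Generating the data from exactly rank-$r$ dynamics is what makes the printed residual essentially zero; with noisy data it would only be small.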
-* Define an $ r \times 1$ initial vector $b$ of dominant modes by
+We define an $r \times 1$ initial vector $b$ of dominant modes by

-$$
+$$
b= \Phi^{+} X_1
-$$ (eq:bphieqn)
+$$ (eq:bphieqn)



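
A tiny self-contained illustration of $b = \Phi^{+} X_1$ (here `Phi` is a random stand-in for the matrix of modes, not the one constructed above), showing that $\Phi b$ reproduces $X_1$ whenever $X_1$ lies in the column space of $\Phi$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, r = 8, 3
Phi = rng.standard_normal((m, r)) + 1j * rng.standard_normal((m, r))  # stand-in modes
X_1 = Phi @ rng.standard_normal(r)           # an initial condition in the span of Phi

b = np.linalg.pinv(Phi) @ X_1                # b = Phi^+ X_1
print(np.allclose(Phi @ b, X_1))             # Phi b recovers X_1
```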
@@ -779,7 +829,7 @@ $$
A \phi_i = \lambda_i \phi_i .
$$

-Thus, $\phi_i$ is an eigenvector of $A$ corresponding to eigenvalue $\lambda_i$ of $\tilde A$.
+Thus, $\phi_i$ is an eigenvector of $A$ that corresponds to eigenvalue $\lambda_i$ of $A$.



@@ -994,15 +1044,15 @@ $$
or

$$
-\tilde X_{t+1} = \Lambda \tilde X_t + \tilde \epsilon_{t+1}
+\bar X_{t+1} = \Lambda \bar X_t + \bar \epsilon_{t+1}
$$ (eq:VARmodes)

-where $\tilde X_t $ is an $r \times 1$ **mode** and $\tilde \epsilon_{t+1}$ is an $r \times 1$
+where $\bar X_t $ is an $r \times 1$ **mode** and $\bar \epsilon_{t+1}$ is an $r \times 1$
shock.

-The $r$ modes $\tilde X_t$ obey the first-order VAR {eq}`eq:VARmodes` in which $\Lambda$ is an $r \times r$ diagonal matrix.
+The $r$ modes $\bar X_t$ obey the first-order VAR {eq}`eq:VARmodes` in which $\Lambda$ is an $r \times r$ diagonal matrix.

-Note that while $\Lambda$ is diagonal, the contemporaneous covariance matrix of $\tilde \epsilon_{t+1}$ need not be.
+Note that while $\Lambda$ is diagonal, the contemporaneous covariance matrix of $\bar \epsilon_{t+1}$ need not be.


**Remark:** It is permissible for $X_t$ to contain lagged values of observables.
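
To illustrate the modes VAR {eq}`eq:VARmodes`, here is a short simulation sketch (the particular $\Lambda$ and shock covariance below are made-up values): a diagonal transition matrix driven by contemporaneously correlated shocks.

```python
import numpy as np

# Simulate bar X_{t+1} = Lambda bar X_t + bar eps_{t+1} with diagonal Lambda
# but a non-diagonal covariance matrix for the shocks.
rng = np.random.default_rng(4)
r, T = 3, 200
Lam = np.diag([0.9, 0.5, -0.3])              # diagonal r x r transition matrix
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])              # contemporaneous shock covariance
L = np.linalg.cholesky(C)

X_bar = np.zeros((r, T))
for t in range(T - 1):
    X_bar[:, t + 1] = Lam @ X_bar[:, t] + L @ rng.standard_normal(r)

print(np.round(np.cov(X_bar), 2))            # the modes themselves are correlated
```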
